gitweb ====== Name ---- gitweb - Git web interface (web frontend to Git repositories) Synopsis -------- To get started with gitweb, run [git-instaweb[1]](git-instaweb) from a Git repository. This will configure and start your web server, and run a web browser pointing at gitweb. Description ----------- Gitweb provides a web interface to Git repositories. Its features include: * Viewing multiple Git repositories with a common root. * Browsing every revision of the repository. * Viewing the contents of files in the repository at any revision. * Viewing the revision log of branches, and the history of files and directories, to see what was changed when, and by whom. * Viewing the blame/annotation details of any file (if enabled). * Generating RSS and Atom feeds of commits, for any branch. The feeds are auto-discoverable in modern web browsers. * Viewing everything that was changed in a revision, and stepping through revisions one at a time, viewing the history of the repository. * Finding commits whose commit messages match a given search term. See <http://repo.or.cz/w/git.git/tree/HEAD:/gitweb/> for the gitweb source code, browsed using gitweb itself. Configuration ------------- Various aspects of gitweb’s behavior can be controlled through the configuration file `gitweb_config.perl` or `/etc/gitweb.conf`. See [gitweb.conf[5]](gitweb.conf) for details. ### Repositories Gitweb can show information from one or more Git repositories. These repositories must all be on the local filesystem and must share a common repository root, i.e. all be under a single parent directory (but see also the "Advanced web server setup" section, "Webserver configuration with multiple projects' root" subsection). ``` our $projectroot = '/path/to/parent/directory'; ``` The default value for `$projectroot` is `/pub/git`. You can change it while building gitweb via the `GITWEB_PROJECTROOT` build configuration variable. By default all Git repositories under `$projectroot` are visible and available to gitweb. 
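As a concrete sketch (all paths here are illustrative, not defaults), a bare repository placed anywhere under the project root becomes visible to gitweb:

```shell
# Illustrative sketch: create a bare repository under a throwaway
# project root; gitweb would list it under its path relative to
# $projectroot, i.e. as "team/demo.git".
projectroot=$(mktemp -d)                 # stand-in for /pub/git
git init --quiet --bare "$projectroot/team/demo.git"
ls "$projectroot/team/demo.git"          # a bare repo: HEAD, objects, refs, ...
```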
The list of projects is generated by default by scanning the `$projectroot` directory for Git repositories (for object databases to be more exact; gitweb is not interested in a working area, and is best suited to showing "bare" repositories). The name of the repository in gitweb is the path to its `$GIT_DIR` (its object database) relative to `$projectroot`. Therefore the repository $repo can be found at "$projectroot/$repo". ### Projects list file format Instead of having gitweb find repositories by scanning filesystem starting from $projectroot, you can provide a pre-generated list of visible projects by setting `$projects_list` to point to a plain text file with a list of projects (with some additional info). This file uses the following format: * One record (for project / repository) per line; does not support line continuation (newline escaping). * Leading and trailing whitespace are ignored. * Whitespace separated fields; any run of whitespace can be used as field separator (rules for Perl’s "`split(" ", $line)`"). * Fields use modified URI encoding, defined in RFC 3986, section 2.1 (Percent-Encoding), or rather "Query string encoding" (see <https://en.wikipedia.org/wiki/Query_string#URL_encoding>), the difference being that SP (" ") can be encoded as "+" (and therefore "+" has to be also percent-encoded). Reserved characters are: "%" (used for encoding), "+" (can be used to encode SPACE), all whitespace characters as defined in Perl, including SP, TAB and LF, (used to separate fields in a record). * Currently recognized fields are: <repository path> path to repository GIT\_DIR, relative to `$projectroot` <repository owner> displayed as repository owner, preferably full name, or email, or both You can generate the projects list index file using the project\_index action (the `TXT` link on projects list page) directly from gitweb; see also "Generating projects list using gitweb" section below. 
Example contents: ``` foo.git Joe+R+Hacker+<[email protected]> foo/bar.git O+W+Ner+<[email protected]> ``` By default this file controls only which projects are **visible** on projects list page (note that entries that do not point to correctly recognized Git repositories won’t be displayed by gitweb). Even if a project is not visible on projects list page, you can view it nevertheless by hand-crafting a gitweb URL. By setting `$strict_export` configuration variable (see [gitweb.conf[5]](gitweb.conf)) to true value you can allow viewing only of repositories also shown on the overview page (i.e. only projects explicitly listed in projects list file will be accessible). ### Generating projects list using gitweb We assume that GITWEB\_CONFIG has its default Makefile value, namely `gitweb_config.perl`. Put the following in `gitweb_make_index.perl` file: ``` read_config_file("gitweb_config.perl"); $projects_list = $projectroot; ``` Then create the following script to get list of project in the format suitable for GITWEB\_LIST build configuration variable (or `$projects_list` variable in gitweb config): ``` #!/bin/sh export GITWEB_CONFIG="gitweb_make_index.perl" export GATEWAY_INTERFACE="CGI/1.1" export HTTP_ACCEPT="*/*" export REQUEST_METHOD="GET" export QUERY_STRING="a=project_index" perl -- /var/www/cgi-bin/gitweb.cgi ``` Run this script and save its output to a file. This file could then be used as projects list file, which means that you can set `$projects_list` to its filename. ### Controlling access to Git repositories By default all Git repositories under `$projectroot` are visible and available to gitweb. You can however configure how gitweb controls access to repositories. * As described in "Projects list file format" section, you can control which projects are **visible** by selectively including repositories in projects list file, and setting `$projects_list` gitweb configuration variable to point to it. 
With `$strict_export` set, projects list file can be used to control which repositories are **available** as well. * You can configure gitweb to only list and allow viewing of the explicitly exported repositories, via `$export_ok` variable in gitweb config file; see [gitweb.conf[5]](gitweb.conf) manpage. If it evaluates to true, gitweb shows repositories only if this file named by `$export_ok` exists in its object database (if directory has the magic file named `$export_ok`). For example [git-daemon[1]](git-daemon) by default (unless `--export-all` option is used) allows pulling only for those repositories that have `git-daemon-export-ok` file. Adding ``` our $export_ok = "git-daemon-export-ok"; ``` makes gitweb show and allow access only to those repositories that can be fetched from via `git://` protocol. * Finally, it is possible to specify an arbitrary perl subroutine that will be called for each repository to determine if it can be exported. The subroutine receives an absolute path to the project (repository) as its only parameter (i.e. "$projectroot/$project"). For example, if you use mod\_perl to run the script, and have dumb HTTP protocol authentication configured for your repositories, you can use the following hook to allow access only if the user is authorized to read the files: ``` $export_auth_hook = sub { use Apache2::SubRequest (); use Apache2::Const -compile => qw(HTTP_OK); my $path = "$_[0]/HEAD"; my $r = Apache2::RequestUtil->request; my $sub = $r->lookup_file($path); return $sub->filename eq $path && $sub->status == Apache2::Const::HTTP_OK; }; ``` ### Per-repository gitweb configuration You can configure individual repositories shown in gitweb by creating file in the `GIT_DIR` of Git repository, or by setting some repo configuration variable (in `GIT_DIR/config`, see [git-config[1]](git-config)). 
You can use the following files in the repository: README.html An HTML file (HTML fragment) which is included on the gitweb project "summary" page inside a `<div>` block element. You can use it for a longer description of a project, to provide links (for example to the project’s homepage), etc. This is recognized only if XSS prevention is off (`$prevent_xss` is false, see [gitweb.conf[5]](gitweb.conf)); a way to include a README safely when XSS prevention is on may be worked out in the future. description (or `gitweb.description`) Short single-line description of a project (of a repository), shortened to `$projects_list_description_width` in the projects list page, which is 25 characters by default (see [gitweb.conf[5]](gitweb.conf)). Plain text file; HTML will be escaped. By default set to ``` Unnamed repository; edit this file to name it for gitweb. ``` from the template during repository creation, usually installed in `/usr/share/git-core/templates/`. You can use the `gitweb.description` repo configuration variable, but the file takes precedence. category (or `gitweb.category`) Single-line category of a project, used to group projects if `$projects_list_group_categories` is enabled. By default (file and configuration variable absent), uncategorized projects are put in the `$project_list_default_category` category. You can use the `gitweb.category` repo configuration variable, but the file takes precedence. The configuration variables `$projects_list_group_categories` and `$project_list_default_category` are described in [gitweb.conf[5]](gitweb.conf). cloneurl (or multiple-valued `gitweb.url`) File with repository URLs (used for clone and fetch), one per line. Displayed in the project summary page. You can use the multiple-valued `gitweb.url` repository configuration variable for that, but the file takes precedence. This is a per-repository enhancement / version of the global prefix-based `@git_base_url_list` gitweb configuration variable (see [gitweb.conf[5]](gitweb.conf)). 
gitweb.owner You can use the `gitweb.owner` repository configuration variable to set the repository’s owner. It is displayed in the project list and summary page. If it’s not set, the filesystem directory’s owner is used (via the GECOS field, i.e. the real name field from **getpwuid**(3)) if `$projects_list` is unset (gitweb scans `$projectroot` for repositories); if `$projects_list` points to a file with a list of repositories, then the project owner defaults to the value from this file for the given repository. various `gitweb.*` config variables (in config) Read the description of the `%feature` hash for a detailed list and descriptions. See also the "Configuring gitweb features" section in [gitweb.conf[5]](gitweb.conf). Actions, and URLs ----------------- Gitweb can use path\_info (component) based URLs, or it can pass all necessary information via query parameters. The typical gitweb URLs are broken down into five components: ``` .../gitweb.cgi/<repo>/<action>/<revision>:/<path>?<arguments> ``` repo The repository the action will be performed on. All actions except for those that list all available projects, in whatever form, require this parameter. action The action that will be run. Defaults to `projects_list` if repo is not set, and to `summary` otherwise. revision Revision shown. Defaults to HEAD. path The path within the <repository> that the action is performed on, for those actions that require it. arguments Any arguments that control the behaviour of the action. Some actions require or allow specifying two revisions, and sometimes even two pathnames. In its most general form, such a path\_info (component) based gitweb URL looks like this: ``` .../gitweb.cgi/<repo>/<action>/<revision_from>:/<path_from>..<revision_to>:/<path_to>?<arguments> ``` Each action is implemented as a subroutine, and must be present in the %actions hash. Some actions are disabled by default, and must be turned on via the feature mechanism. 
For example to enable `blame` view add the following to gitweb configuration file: ``` $feature{'blame'}{'default'} = [1]; ``` ### Actions: The standard actions are: project\_list Lists the available Git repositories. This is the default command if no repository is specified in the URL. summary Displays summary about given repository. This is the default command if no action is specified in URL, and only repository is specified. heads remotes Lists all local or all remote-tracking branches in given repository. The latter is not available by default, unless configured. tags List all tags (lightweight and annotated) in given repository. blob tree Shows the files and directories in a given repository path, at given revision. This is default command if no action is specified in the URL, and path is given. blob\_plain Returns the raw data for the file in given repository, at given path and revision. Links to this action are marked `raw`. blobdiff Shows the difference between two revisions of the same file. blame blame\_incremental Shows the blame (also called annotation) information for a file. On a per line basis it shows the revision in which that line was last changed and the user that committed the change. The incremental version (which if configured is used automatically when JavaScript is enabled) uses Ajax to incrementally add blame info to the contents of given file. This action is disabled by default for performance reasons. commit commitdiff Shows information about a specific commit in a repository. The `commit` view shows information about commit in more detail, the `commitdiff` action shows changeset for given commit. patch Returns the commit in plain text mail format, suitable for applying with [git-am[1]](git-am). tag Display specific annotated tag (tag object). log shortlog Shows log information (commit message or just commit subject) for a given branch (starting from given revision). The `shortlog` view is more compact; it shows one commit per line. 
history Shows the history of the file or directory in a given repository path, starting from a given revision (defaults to HEAD, i.e. the default branch). This view is similar to the `shortlog` view. rss atom Generates an RSS (or Atom) feed of changes to the repository. Webserver configuration ----------------------- This section explains how to configure some common webservers to run gitweb. In all cases, `/path/to/gitweb` in the examples is the directory you installed gitweb in, and it contains `gitweb_config.perl`. If you’ve configured a web server that isn’t listed here for gitweb, please send in the instructions so they can be included in a future release. ### Apache as CGI Apache must be configured to support CGI scripts in the directory in which gitweb is installed. Let’s assume that it is the `/var/www/cgi-bin` directory. ``` ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" <Directory "/var/www/cgi-bin"> Options Indexes FollowSymlinks ExecCGI AllowOverride None Order allow,deny Allow from all </Directory> ``` With that configuration the full path to browse repositories would be: ``` http://server/cgi-bin/gitweb.cgi ``` ### Apache with mod\_perl, via ModPerl::Registry You can use mod\_perl with gitweb. You must install Apache::Registry (for mod\_perl 1.x) or ModPerl::Registry (for mod\_perl 2.x) to enable this support. Assuming that gitweb is installed to `/var/www/perl`, the following Apache configuration (for mod\_perl 2.x) is suitable. ``` Alias /perl "/var/www/perl" <Directory "/var/www/perl"> SetHandler perl-script PerlResponseHandler ModPerl::Registry PerlOptions +ParseHeaders Options Indexes FollowSymlinks +ExecCGI AllowOverride None Order allow,deny Allow from all </Directory> ``` With that configuration the full path to browse repositories would be: ``` http://server/perl/gitweb.cgi ``` ### Apache with FastCGI Gitweb works with Apache and FastCGI. First you need to rename, copy or symlink gitweb.cgi to gitweb.fcgi. 
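For instance (the install directory here is a stand-in, not a required path):

```shell
# Illustrative: expose the CGI script under the .fcgi name without
# duplicating the file. A placeholder stands in for the real script.
cd "$(mktemp -d)"                 # stand-in for the gitweb install dir
touch gitweb.cgi                  # placeholder for the real gitweb.cgi
ln -s gitweb.cgi gitweb.fcgi      # or: cp gitweb.cgi gitweb.fcgi
```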
Let’s assume that gitweb is installed in `/usr/share/gitweb` directory. The following Apache configuration is suitable (UNTESTED!) ``` FastCgiServer /usr/share/gitweb/gitweb.cgi ScriptAlias /gitweb /usr/share/gitweb/gitweb.cgi Alias /gitweb/static /usr/share/gitweb/static <Directory /usr/share/gitweb/static> SetHandler default-handler </Directory> ``` With that configuration the full path to browse repositories would be: ``` http://server/gitweb ``` Advanced web server setup ------------------------- All of those examples use request rewriting, and need `mod_rewrite` (or equivalent; examples below are written for Apache). ### Single URL for gitweb and for fetching If you want to have one URL for both gitweb and your `http://` repositories, you can configure Apache like this: ``` <VirtualHost *:80> ServerName git.example.org DocumentRoot /pub/git SetEnv GITWEB_CONFIG /etc/gitweb.conf # turning on mod rewrite RewriteEngine on # make the front page an internal rewrite to the gitweb script RewriteRule ^/$ /cgi-bin/gitweb.cgi # make access for "dumb clients" work RewriteRule ^/(.*\.git/(?!/?(HEAD|info|objects|refs)).*)?$ \ /cgi-bin/gitweb.cgi%{REQUEST_URI} [L,PT] </VirtualHost> ``` The above configuration expects your public repositories to live under `/pub/git` and will serve them as `http://git.domain.org/dir-under-pub-git`, both as clonable Git URL and as browseable gitweb interface. If you then start your [git-daemon[1]](git-daemon) with `--base-path=/pub/git --export-all` then you can even use the `git://` URL with exactly the same path. Setting the environment variable `GITWEB_CONFIG` will tell gitweb to use the named file (i.e. in this example `/etc/gitweb.conf`) as a configuration for gitweb. You don’t really need it in above example; it is required only if your configuration file is in different place than built-in (during compiling gitweb) `gitweb_config.perl` or `/etc/gitweb.conf`. 
See [gitweb.conf[5]](gitweb.conf) for details, especially information about precedence rules. If you use the rewrite rules from the example you **might** also need something like the following in your gitweb configuration file (`/etc/gitweb.conf` following example): ``` @stylesheets = ("/some/absolute/path/gitweb.css"); $my_uri = "/"; $home_link = "/"; $per_request_config = 1; ``` Nowadays though gitweb should create HTML base tag when needed (to set base URI for relative links), so it should work automatically. ### Webserver configuration with multiple projects' root If you want to use gitweb with several project roots you can edit your Apache virtual host and gitweb configuration files in the following way. The virtual host configuration (in Apache configuration file) should look like this: ``` <VirtualHost *:80> ServerName git.example.org DocumentRoot /pub/git SetEnv GITWEB_CONFIG /etc/gitweb.conf # turning on mod rewrite RewriteEngine on # make the front page an internal rewrite to the gitweb script RewriteRule ^/$ /cgi-bin/gitweb.cgi [QSA,L,PT] # look for a public_git directory in unix users' home # http://git.example.org/~<user>/ RewriteRule ^/\~([^\/]+)(/|/gitweb.cgi)?$ /cgi-bin/gitweb.cgi \ [QSA,E=GITWEB_PROJECTROOT:/home/$1/public_git/,L,PT] # http://git.example.org/+<user>/ #RewriteRule ^/\+([^\/]+)(/|/gitweb.cgi)?$ /cgi-bin/gitweb.cgi \ [QSA,E=GITWEB_PROJECTROOT:/home/$1/public_git/,L,PT] # http://git.example.org/user/<user>/ #RewriteRule ^/user/([^\/]+)/(gitweb.cgi)?$ /cgi-bin/gitweb.cgi \ [QSA,E=GITWEB_PROJECTROOT:/home/$1/public_git/,L,PT] # defined list of project roots RewriteRule ^/scm(/|/gitweb.cgi)?$ /cgi-bin/gitweb.cgi \ [QSA,E=GITWEB_PROJECTROOT:/pub/scm/,L,PT] RewriteRule ^/var(/|/gitweb.cgi)?$ /cgi-bin/gitweb.cgi \ [QSA,E=GITWEB_PROJECTROOT:/var/git/,L,PT] # make access for "dumb clients" work RewriteRule ^/(.*\.git/(?!/?(HEAD|info|objects|refs)).*)?$ \ /cgi-bin/gitweb.cgi%{REQUEST_URI} [L,PT] </VirtualHost> ``` Here actual project root is 
passed to gitweb via the `GITWEB_PROJECTROOT` environment variable from the web server, so you need to put the following line in the gitweb configuration file (`/etc/gitweb.conf` in the above example): ``` $projectroot = $ENV{'GITWEB_PROJECTROOT'} || "/pub/git"; ``` **Note** that this requires `$projectroot` to be set for each request, so either `$per_request_config` must be false, or the above must be put in code referenced by `$per_request_config`. These configurations enable two things. First, each Unix user (`<user>`) of the server will be able to browse through gitweb Git repositories found in `~/public_git/` with the following URL: ``` http://git.example.org/~<user>/ ``` If you do not want this feature on your server just remove the second rewrite rule. If you already use `mod_userdir` in your virtual host or you don’t want to use the '~' as first character, just comment or remove the second rewrite rule, and uncomment one of the following according to what you want. Second, repositories found in `/pub/scm/` and `/var/git/` will be accessible through `http://git.example.org/scm/` and `http://git.example.org/var/`. You can add as many project roots as you want by adding rewrite rules like the third and the fourth. ### PATH\_INFO usage If you enable PATH\_INFO usage in gitweb by putting ``` $feature{'pathinfo'}{'default'} = [1]; ``` in your gitweb configuration file, it is possible to set up your server so that it consumes and produces URLs in the form ``` http://git.example.com/project.git/shortlog/sometag ``` i.e. without the `gitweb.cgi` part, by using a configuration such as the following. 
This configuration assumes that `/var/www/gitweb` is the DocumentRoot of your webserver, contains the gitweb.cgi script and complementary static files (stylesheet, favicon, JavaScript): ``` <VirtualHost *:80> ServerAlias git.example.com DocumentRoot /var/www/gitweb <Directory /var/www/gitweb> Options ExecCGI AddHandler cgi-script cgi DirectoryIndex gitweb.cgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^.* /gitweb.cgi/$0 [L,PT] </Directory> </VirtualHost> ``` The rewrite rule guarantees that existing static files will be properly served, whereas any other URL will be passed to gitweb as PATH\_INFO parameter. **Notice** that in this case you don’t need special settings for `@stylesheets`, `$my_uri` and `$home_link`, but you lose "dumb client" access to your project .git dirs (described in "Single URL for gitweb and for fetching" section). A possible workaround for the latter is the following: in your project root dir (e.g. `/pub/git`) have the projects named **without** a .git extension (e.g. `/pub/git/project` instead of `/pub/git/project.git`) and configure Apache as follows: ``` <VirtualHost *:80> ServerAlias git.example.com DocumentRoot /var/www/gitweb AliasMatch ^(/.*?)(\.git)(/.*)?$ /pub/git$1$3 <Directory /var/www/gitweb> Options ExecCGI AddHandler cgi-script cgi DirectoryIndex gitweb.cgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^.* /gitweb.cgi/$0 [L,PT] </Directory> </VirtualHost> ``` The additional AliasMatch makes it so that ``` http://git.example.com/project.git ``` will give raw access to the project’s Git dir (so that the project can be cloned), while ``` http://git.example.com/project ``` will provide human-friendly gitweb access. 
This solution is not 100% bulletproof, in the sense that if some project has a named ref (branch, tag) starting with `git/`, then paths such as ``` http://git.example.com/project/command/abranch..git/abranch ``` will fail with a 404 error. Bugs ---- Please report any bugs or feature requests to [[email protected]](mailto:[email protected]), putting "gitweb" in the subject of email. See also -------- [gitweb.conf[5]](gitweb.conf), [git-instaweb[1]](git-instaweb) `gitweb/README`, `gitweb/INSTALL`
git-rev-list ============ Name ---- git-rev-list - Lists commit objects in reverse chronological order Synopsis -------- ``` git rev-list [<options>] <commit>…​ [--] [<path>…​] ``` Description ----------- List commits that are reachable by following the `parent` links from the given commit(s), but exclude commits that are reachable from the one(s) given with a `^` in front of them. The output is given in reverse chronological order by default. You can think of this as a set operation. Commits reachable from any of the commits given on the command line form a set, and then commits reachable from any of the ones given with `^` in front are subtracted from that set. The remaining commits are what comes out in the command’s output. Various other options and path parameters can be used to further limit the result. Thus, the following command: ``` $ git rev-list foo bar ^baz ``` means "list all the commits which are reachable from `foo` or `bar`, but not from `baz`". A special notation "`<commit1>`..`<commit2>`" can be used as a short-hand for "^`<commit1>` `<commit2>`". For example, either of the following may be used interchangeably: ``` $ git rev-list origin..HEAD $ git rev-list HEAD ^origin ``` Another special notation is "`<commit1>`…​`<commit2>`" which is useful for merges. The resulting set of commits is the symmetric difference between the two operands. The following two commands are equivalent: ``` $ git rev-list A B --not $(git merge-base --all A B) $ git rev-list A...B ``` `rev-list` is an essential Git command, since it provides the ability to build and traverse commit ancestry graphs. For this reason, it has a lot of different options that enable it to be used by commands as different as `git bisect` and `git repack`. Options ------- ### Commit Limiting Besides specifying a range of commits that should be listed using the special notations explained in the description, additional commit limiting may be applied. 
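The set subtraction described above can be demonstrated on a throwaway repository (all names here are illustrative):

```shell
# Build a tiny history: one shared commit, branch "base" pointing at it,
# then one more commit on the current branch. ^base subtracts the
# shared commit, leaving only the tip.
repo=$(mktemp -d)
cd "$repo" && git init --quiet
git config user.name demo && git config user.email [email protected]
git commit --quiet --allow-empty -m 'shared commit'
git branch base
git commit --quiet --allow-empty -m 'tip only'
git rev-list HEAD ^base                          # lists only "tip only"
subtracted=$(git rev-list --count HEAD ^base)    # 1 commit
range=$(git rev-list --count base..HEAD)         # same set, ".." shorthand
```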
Using more options generally further limits the output (e.g. `--since=<date1>` limits to commits newer than `<date1>`, and using it with `--grep=<pattern>` further limits to commits whose log message has a line that matches `<pattern>`), unless otherwise noted. Note that these are applied before commit ordering and formatting options, such as `--reverse`. -<number> -n <number> --max-count=<number> Limit the number of commits to output. --skip=<number> Skip `number` commits before starting to show the commit output. --since=<date> --after=<date> Show commits more recent than a specific date. --since-as-filter=<date> Show all commits more recent than a specific date. This visits all commits in the range, rather than stopping at the first commit which is older than a specific date. --until=<date> --before=<date> Show commits older than a specific date. --max-age=<timestamp> --min-age=<timestamp> Limit the commits output to specified time range. --author=<pattern> --committer=<pattern> Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression). With more than one `--author=<pattern>`, commits whose author matches any of the given patterns are chosen (similarly for multiple `--committer=<pattern>`). --grep-reflog=<pattern> Limit the commits output to ones with reflog entries that match the specified pattern (regular expression). With more than one `--grep-reflog`, commits whose reflog message matches any of the given patterns are chosen. It is an error to use this option unless `--walk-reflogs` is in use. --grep=<pattern> Limit the commits output to ones with log message that matches the specified pattern (regular expression). With more than one `--grep=<pattern>`, commits whose message matches any of the given patterns are chosen (but see `--all-match`). --all-match Limit the commits output to ones that match all given `--grep`, instead of ones that match at least one. 
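A small sketch of combining such limits on a throwaway repository (commit messages are made up):

```shell
# Three empty commits, then limit the output by count and by message.
repo=$(mktemp -d)
cd "$repo" && git init --quiet
git config user.name demo && git config user.email [email protected]
for msg in 'add parser' 'fix parser bug' 'add docs'; do
  git commit --quiet --allow-empty -m "$msg"
done
newest=$(git rev-list --max-count=1 HEAD)            # just the tip commit
parser=$(git rev-list --count --grep='parser' HEAD)  # messages matching regex
```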
--invert-grep Limit the commits output to ones with log message that do not match the pattern specified with `--grep=<pattern>`. -i --regexp-ignore-case Match the regular expression limiting patterns without regard to letter case. --basic-regexp Consider the limiting patterns to be basic regular expressions; this is the default. -E --extended-regexp Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions. -F --fixed-strings Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression). -P --perl-regexp Consider the limiting patterns to be Perl-compatible regular expressions. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. --remove-empty Stop when a given path disappears from the tree. --merges Print only merge commits. This is exactly the same as `--min-parents=2`. --no-merges Do not print commits with more than one parent. This is exactly the same as `--max-parents=1`. --min-parents=<number> --max-parents=<number> --no-min-parents --no-max-parents Show only commits which have at least (or at most) that many parent commits. In particular, `--max-parents=1` is the same as `--no-merges`, `--min-parents=2` is the same as `--merges`. `--max-parents=0` gives all root commits and `--min-parents=3` all octopus merges. `--no-min-parents` and `--no-max-parents` reset these limits (to no limit) again. Equivalent forms are `--min-parents=0` (any commit has 0 or more parents) and `--max-parents=-1` (negative numbers denote no upper limit). --first-parent When finding commits to include, follow only the first parent commit upon seeing a merge commit. 
This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge. --exclude-first-parent-only When finding commits to exclude (with a `^`), follow only the first parent commit upon seeing a merge commit. This can be used to find the set of changes in a topic branch from the point where it diverged from the remote branch, given that arbitrary merges can be valid topic branch changes. --not Reverses the meaning of the `^` prefix (or lack thereof) for all following revision specifiers, up to the next `--not`. --all Pretend as if all the refs in `refs/`, along with `HEAD`, are listed on the command line as `<commit>`. --branches[=<pattern>] Pretend as if all the refs in `refs/heads` are listed on the command line as `<commit>`. If `<pattern>` is given, limit branches to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --tags[=<pattern>] Pretend as if all the refs in `refs/tags` are listed on the command line as `<commit>`. If `<pattern>` is given, limit tags to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --remotes[=<pattern>] Pretend as if all the refs in `refs/remotes` are listed on the command line as `<commit>`. If `<pattern>` is given, limit remote-tracking branches to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --glob=<glob-pattern> Pretend as if all the refs matching shell glob `<glob-pattern>` are listed on the command line as `<commit>`. Leading `refs/`, is automatically prepended if missing. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. 
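A sketch on a throwaway repository of how `--first-parent` collapses a merged topic branch, with `--merges` counting only the merge commit (branch names are illustrative):

```shell
# One mainline commit, a two-commit topic branch, and a --no-ff merge.
repo=$(mktemp -d)
cd "$repo" && git init --quiet
git config user.name demo && git config user.email [email protected]
git commit --quiet --allow-empty -m 'mainline work'
main=$(git symbolic-ref --short HEAD)
git checkout --quiet -b topic
git commit --quiet --allow-empty -m 'topic 1'
git commit --quiet --allow-empty -m 'topic 2'
git checkout --quiet "$main"
git merge --quiet --no-ff -m 'merge topic' topic
everything=$(git rev-list --count HEAD)               # all 4 commits
mainline=$(git rev-list --count --first-parent HEAD)  # merge + 'mainline work'
merges=$(git rev-list --count --merges HEAD)          # just the merge commit
```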
--exclude=<glob-pattern> Do not include refs matching `<glob-pattern>` that the next `--all`, `--branches`, `--tags`, `--remotes`, or `--glob` would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next `--all`, `--branches`, `--tags`, `--remotes`, or `--glob` option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with `refs/heads`, `refs/tags`, or `refs/remotes` when applied to `--branches`, `--tags`, or `--remotes`, respectively, and they must begin with `refs/` when applied to `--glob` or `--all`. If a trailing `/*` is intended, it must be given explicitly. --exclude-hidden=[receive|uploadpack] Do not include refs that would be hidden by `git-receive-pack` or `git-upload-pack` by consulting the appropriate `receive.hideRefs` or `uploadpack.hideRefs` configuration along with `transfer.hideRefs` (see [git-config[1]](git-config)). This option affects the next pseudo-ref option `--all` or `--glob` and is cleared after processing them. --reflog Pretend as if all objects mentioned by reflogs are listed on the command line as `<commit>`. --alternate-refs Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line. An alternate repository is any repository whose object directory is specified in `objects/info/alternates`. The set of included objects may be modified by `core.alternateRefsCommand`, etc. See [git-config[1]](git-config). --single-worktree By default, all working trees will be examined by the following options when there are more than one (see [git-worktree[1]](git-worktree)): `--all`, `--reflog` and `--indexed-objects`. This option forces them to examine the current working tree only. --ignore-missing Upon seeing an invalid object name in the input, pretend as if the bad input was not given. --stdin In addition to the `<commit>` listed on the command line, read them from the standard input. 
If a `--` separator is seen, stop reading commits and start reading paths to limit the result. --quiet Don’t print anything to standard output. This form is primarily meant to allow the caller to test the exit status to see if a range of objects is fully connected (or not). It is faster than redirecting stdout to `/dev/null` as the output does not have to be formatted. --disk-usage --disk-usage=human Suppress normal output; instead, print the sum of the bytes used for on-disk storage by the selected commits or objects. This is equivalent to piping the output into `git cat-file --batch-check='%(objectsize:disk)'`, except that it runs much faster (especially with `--use-bitmap-index`). See the `CAVEATS` section in [git-cat-file[1]](git-cat-file) for the limitations of what "on-disk storage" means. With the optional value `human`, on-disk storage size is shown as a human-readable string (e.g. 12.24 KiB, 3.50 MiB). --cherry-mark Like `--cherry-pick` (see below) but mark equivalent commits with `=` rather than omitting them, and inequivalent ones with `+`. --cherry-pick Omit any commit that introduces the same change as another commit on the “other side” when the set of commits is limited with symmetric difference. For example, if you have two branches, `A` and `B`, a usual way to list all commits on only one side of them is with `--left-right` (see the example below in the description of the `--left-right` option). However, it shows the commits that were cherry-picked from the other branch (for example, “3rd on b” may be cherry-picked from branch A). With this option, such pairs of commits are excluded from the output. --left-only --right-only List only commits on the respective side of a symmetric difference, i.e. only those which would be marked `<` resp. `>` by `--left-right`. For example, `--cherry-pick --right-only A...B` omits those commits from `B` which are in `A` or are patch-equivalent to a commit in `A`.
In other words, this lists the `+` commits from `git cherry A B`. More precisely, `--cherry-pick --right-only --no-merges` gives the exact list. --cherry A synonym for `--right-only --cherry-mark --no-merges`; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with `git log --cherry upstream...mybranch`, similar to `git cherry upstream mybranch`. -g --walk-reflogs Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones. When this option is used, you cannot specify commits to exclude (that is, `^commit`, `commit1..commit2`, and `commit1...commit2` notations cannot be used). With `--pretty` format other than `oneline` and `reference` (for obvious reasons), this causes the output to have two extra lines of information taken from the reflog. The reflog designator in the output may be shown as `ref@{Nth}` (where `Nth` is the reverse-chronological index in the reflog) or as `ref@{timestamp}` (with the timestamp for that entry), depending on a few rules:

1. If the starting point is specified as `ref@{Nth}`, show the index format.
2. If the starting point was specified as `ref@{now}`, show the timestamp format.
3. If neither was used, but `--date` was given on the command line, show the timestamp in the format requested by `--date`.
4. Otherwise, show the index format.

Under `--pretty=oneline`, the commit message is prefixed with this information on the same line. This option cannot be combined with `--reverse`. See also [git-reflog[1]](git-reflog). Under `--pretty=reference`, this information will not be shown at all. --merge After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge. --boundary Output excluded boundary commits. Boundary commits are prefixed with `-`. --use-bitmap-index Try to speed up the traversal using the pack bitmap index (if one is available).
Note that when traversing with `--objects`, trees and blobs will not have their associated path printed. --progress=<header> Show progress reports on stderr as objects are considered. The `<header>` text will be printed with each progress update.

### History Simplification

Sometimes you are only interested in parts of the history, for example the commits modifying a particular <path>. History simplification has two parts: selecting which commits to show, and deciding how the simplification is performed, as there are various strategies for simplifying the history. The following options select the commits to be shown: <paths> Commits modifying the given <paths> are selected. --simplify-by-decoration Commits that are referred to by some branch or tag are selected. Note that extra commits can be shown to give a meaningful history. The following options affect the way the simplification is performed: Default mode Simplifies the history to the simplest history explaining the final state of the tree. Simplest because it prunes some side branches if the end result is the same (i.e. merging branches with the same content). --show-pulls Include all commits from the default mode, but also any merge commits that are not TREESAME to the first parent but are TREESAME to a later parent. This mode is helpful for showing the merge commits that "first introduced" a change to a branch. --full-history Same as the default mode, but does not prune some history. --dense Only the selected commits are shown, plus some to have a meaningful history. --sparse All commits in the simplified history are shown. --simplify-merges Additional option to `--full-history` to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. --ancestry-path[=<commit>] When given a range of commits to display (e.g.
`commit1..commit2` or `commit2 ^commit1`), only display commits in that range that are ancestors of <commit>, descendants of <commit>, or <commit> itself. If no commit is specified, use `commit1` (the excluded part of the range) as <commit>. Can be passed multiple times; if so, a commit is included if it is any of the commits given or if it is an ancestor or descendant of one of them. A more detailed explanation follows. Suppose you specified `foo` as the <paths>. We shall call commits that modify `foo` !TREESAME, and the rest TREESAME. (In a diff filtered for `foo`, they look different and equal, respectively.) In the following, we will always refer to the same example history to illustrate the differences between simplification settings. We assume that you are filtering for a file `foo` in this commit graph:

```
  .-A---M---N---O---P---Q
 /     /   /   /   /   /
I     B   C   D   E   Y
 \   /   /   /   /   /
  `-------------'   X
```

The horizontal line of history A---Q is taken to be the first parent of each merge. The commits are:

* `I` is the initial commit, in which `foo` exists with contents “asdf”, and a file `quux` exists with contents “quux”. Initial commits are compared to an empty tree, so `I` is !TREESAME.
* In `A`, `foo` contains just “foo”.
* `B` contains the same change as `A`. Its merge `M` is trivial and hence TREESAME to all parents.
* `C` does not change `foo`, but its merge `N` changes it to “foobar”, so it is not TREESAME to any parent.
* `D` sets `foo` to “baz”. Its merge `O` combines the strings from `N` and `D` to “foobarbaz”; i.e., it is not TREESAME to any parent.
* `E` changes `quux` to “xyzzy”, and its merge `P` combines the strings to “quux xyzzy”. `P` is TREESAME to `O`, but not to `E`.
* `X` is an independent root commit that added a new file `side`, and `Y` modified it. `Y` is TREESAME to `X`. Its merge `Q` added `side` to `P`, and `Q` is TREESAME to `P`, but not to `Y`.
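The example history above can be reconstructed with a short script, which is handy for experimenting with the simplification modes discussed below. The branch names, identity, and exact merge resolutions here are our own invention; a POSIX shell and git >= 2.28 (for `git init -b`) are assumed:

```shell
cd "$(mktemp -d)"
git init -q -b main
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

echo asdf >foo; echo quux >quux; git add .; git commit -qm I
git branch seed                       # remember I; side branches start here

git checkout -q -b b-side seed; echo foo >foo; git commit -aqm B
git checkout -q main;           echo foo >foo; git commit -aqm A
git merge -q -m M b-side              # same content on both sides: trivial

git checkout -q -b c-side seed; echo c >c; git add c; git commit -qm C
git checkout -q main; git merge -q --no-commit c-side >/dev/null
echo foobar >foo; git add foo; git commit -qm N

git checkout -q -b d-side seed; echo baz >foo; git commit -aqm D
git checkout -q main; git merge d-side >/dev/null 2>&1 || true  # conflicts
echo foobarbaz >foo; git add foo; git commit -qm O

git checkout -q -b e-side seed; echo xyzzy >quux; git commit -aqm E
git checkout -q main; git merge -q --no-commit e-side >/dev/null
echo 'quux xyzzy' >quux; git add quux; git commit -qm P

git checkout -q --orphan x-root; git rm -qrf .
echo side >side; git add side; git commit -qm X
echo more >>side; git commit -aqm Y
git checkout -q main
git merge -q --allow-unrelated-histories -m Q x-root

# Default simplification keeps I, A, D, N, and O (see the graph below):
git rev-list --count main -- foo
# --full-history keeps I, A, B, N, D, O, P, and Q:
git rev-list --count --full-history main -- foo
```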
`rev-list` walks backwards through history, including or excluding commits based on whether `--full-history` and/or parent rewriting (via `--parents` or `--children`) are used. The following settings are available. Default mode Commits are included if they are not TREESAME to any parent (though this can be changed, see `--sparse` below). If the commit was a merge, and it was TREESAME to one parent, follow only that parent. (Even if there are several TREESAME parents, follow only one of them.) Otherwise, follow all parents. This results in:

```
  .-A---N---O
 /     /   /
I---------D
```

Note how the rule to only follow the TREESAME parent, if one is available, removed `B` from consideration entirely. `C` was considered via `N`, but is TREESAME. Root commits are compared to an empty tree, so `I` is !TREESAME. Parent/child relations are only visible with `--parents`, but that does not affect the commits selected in default mode, so we have shown the parent lines. --full-history without parent rewriting This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them. Even if more than one side of the merge has commits that are included, this does not imply that the merge itself is! In the example, we get

```
I  A  B  N  D  O  P  Q
```

`M` was excluded because it is TREESAME to both parents. `E`, `C` and `B` were all walked, but only `B` was !TREESAME, so the others do not appear. Note that without parent rewriting, it is not really possible to talk about the parent/child relationships between the commits, so we show them disconnected. --full-history with parent rewriting Ordinary commits are only included if they are !TREESAME (though this can be changed, see `--sparse` below). Merges are always included. However, their parent list is rewritten: Along each parent, prune away commits that are not included themselves.
This results in

```
  .-A---M---N---O---P---Q
 /     /   /   /   /   /
I     B   /   D   /
 \   /   /   /   /
  `---------------'
```

Compare to `--full-history` without rewriting above. Note that `E` was pruned away because it is TREESAME, but the parent list of `P` was rewritten to contain `E`'s parent `I`. The same happened for `C` and `N`, and `X`, `Y` and `Q`. In addition to the above settings, you can change whether TREESAME affects inclusion: --dense Commits that are walked are included if they are not TREESAME to any parent. --sparse All commits that are walked are included. Note that without `--full-history`, this still simplifies merges: if one of the parents is TREESAME, we follow only that one, so the other sides of the merge are never walked. --simplify-merges First, build a history graph in the same way that `--full-history` with parent rewriting does (see above). Then simplify each commit `C` to its replacement `C'` in the final history according to the following rules:

* Set `C'` to `C`.
* Replace each parent `P` of `C'` with its simplification `P'`. In the process, drop parents that are ancestors of other parents or that are root commits TREESAME to an empty tree, and remove duplicates, but take care to never drop all parents that we are TREESAME to.
* If after this parent rewriting, `C'` is a root or merge commit (has zero or >1 parents), a boundary commit, or !TREESAME, it remains. Otherwise, it is replaced with its only parent.

The effect of this is best shown by way of comparing to `--full-history` with parent rewriting. The example turns into:

```
  .-A---M---N---O
 /     /       /
I     B       D
 \   /       /
  `---------'
```

Note the major differences in `N`, `P`, and `Q` over `--full-history`:

* `N`'s parent list had `I` removed, because it is an ancestor of the other parent `M`. Still, `N` remained because it is !TREESAME.
* `P`'s parent list similarly had `I` removed. `P` was then removed completely, because it had one parent and is TREESAME.
* `Q`'s parent list had `Y` simplified to `X`.
`X` was then removed, because it was a TREESAME root. `Q` was then removed completely, because it had one parent and is TREESAME. There is another simplification mode available: --ancestry-path[=<commit>] Limit the displayed commits to those which are an ancestor of <commit>, or which are a descendant of <commit>, or are <commit> itself. As an example use case, consider the following commit history:

```
    D---E-------F
   /     \       \
  B---C---G---H---I---J
 /         \
A-------K---------------L--M
```

A regular `D..M` computes the set of commits that are ancestors of `M`, but excludes the ones that are ancestors of `D`. This is useful to see what happened to the history leading to `M` since `D`, in the sense that “what does `M` have that did not exist in `D`”. The result in this example would be all the commits, except `A` and `B` (and `D` itself, of course). When we want to find out what commits in `M` are contaminated with the bug introduced by `D` and need fixing, however, we might want to view only the subset of `D..M` that are actually descendants of `D`, i.e. excluding `C` and `K`. This is exactly what the `--ancestry-path` option does. Applied to the `D..M` range, it results in:

```
        E-------F
         \       \
          G---H---I---J
                       \
                        L--M
```

We can also use `--ancestry-path=D` instead of `--ancestry-path` which means the same thing when applied to the `D..M` range but is just more explicit. If we instead are interested in a given topic within this range, and all commits affected by that topic, we may only want to view the subset of `D..M` which contain that topic in their ancestry path. So, using `--ancestry-path=H D..M` for example would result in:

```
        E
         \
          G---H---I---J
                       \
                        L--M
```

Whereas `--ancestry-path=K D..M` would result in:

```
K---------------L--M
```

Before discussing another option, `--show-pulls`, we need to create a new example history.
A common problem users face when looking at simplified history is that a commit they know changed a file somehow does not appear in the file’s simplified history. Let’s demonstrate a new example and show how options such as `--full-history` and `--simplify-merges` work in that case:

```
  .-A---M-----C--N---O---P
 /     / \  \  \/   /   /
I     B   \  R-'`-Z'   /
 \   /     \/         /
  \ /      /\        /
   `---X--'  `---Y--'
```

For this example, suppose `I` created `file.txt` which was modified by `A`, `B`, and `X` in different ways. The single-parent commits `C`, `Z`, and `Y` do not change `file.txt`. The merge commit `M` was created by resolving the merge conflict to include both changes from `A` and `B` and hence is not TREESAME to either. The merge commit `R`, however, was created by ignoring the contents of `file.txt` at `M` and taking only the contents of `file.txt` at `X`. Hence, `R` is TREESAME to `X` but not `M`. Finally, the natural merge resolution to create `N` is to take the contents of `file.txt` at `R`, so `N` is TREESAME to `R` but not `C`. The merge commits `O` and `P` are TREESAME to their first parents, but not to their second parents, `Z` and `Y` respectively. When using the default mode, `N` and `R` both have a TREESAME parent, so those edges are walked and the others are ignored. The resulting history graph is:

```
I---X
```

When using `--full-history`, Git walks every edge. This will discover the commits `A` and `B` and the merge `M`, but also will reveal the merge commits `O` and `P`. With parent rewriting, the resulting graph is:

```
  .-A---M--------N---O---P
 /     / \  \  \/   /   /
I     B   \  R-'`--'   /
 \   /     \/         /
  \ /      /\        /
   `---X--'  `------'
```

Here, the merge commits `O` and `P` contribute extra noise, as they did not actually contribute a change to `file.txt`. They only merged a topic that was based on an older version of `file.txt`.
This is a common issue in repositories using a workflow where many contributors work in parallel and merge their topic branches along a single trunk: many unrelated merges appear in the `--full-history` results. When using the `--simplify-merges` option, the commits `O` and `P` disappear from the results. This is because the rewritten second parents of `O` and `P` are reachable from their first parents. Those edges are removed and then the commits look like single-parent commits that are TREESAME to their parent. This also happens to the commit `N`, resulting in a history view as follows:

```
  .-A---M--.
 /     /    \
I     B      R
 \   /      /
  \ /      /
   `---X--'
```

In this view, we see all of the important single-parent changes from `A`, `B`, and `X`. We also see the carefully-resolved merge `M` and the not-so-carefully-resolved merge `R`. This is usually enough information to determine why the commits `A` and `B` "disappeared" from history in the default view. However, there are a few issues with this approach. The first issue is performance. Unlike any previous option, the `--simplify-merges` option requires walking the entire commit history before returning a single result. This can make the option difficult to use for very large repositories. The second issue is one of auditing. When many contributors are working on the same repository, it is important to know which merge commits introduced a change into an important branch. The problematic merge `R` above is not likely to be the merge commit that was used to merge into an important branch. Instead, the merge `N` was used to merge `R` and `X` into the important branch. This commit may have information about why the change `X` came to override the changes from `A` and `B` in its commit message. --show-pulls In addition to the commits shown in the default history, show each merge commit that is not TREESAME to its first parent but is TREESAME to a later parent.
When a merge commit is included by `--show-pulls`, the merge is treated as if it "pulled" the change from another branch. When using `--show-pulls` on this example (and no other options), the resulting graph is:

```
I---X---R---N
```

Here, the merge commits `R` and `N` are included because they pulled the commits `X` and `R` into the base branch, respectively. These merges are the reason the commits `A` and `B` do not appear in the default history. When `--show-pulls` is paired with `--simplify-merges`, the graph includes all of the necessary information:

```
  .-A---M--.   N
 /     /    \ /
I     B      R
 \   /      /
  \ /      /
   `---X--'
```

Notice that since `M` is reachable from `R`, the edge from `N` to `M` was simplified away. However, `N` still appears in the history as an important commit because it "pulled" the change `R` into the main branch. The `--simplify-by-decoration` option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to being simplified away).

### Bisection Helpers

--bisect Limit output to the one commit object which is roughly halfway between included and excluded commits. Note that the bad bisection ref `refs/bisect/bad` is added to the included commits (if it exists) and the good bisection refs `refs/bisect/good-*` are added to the excluded commits (if they exist). Thus, supposing there are no refs in `refs/bisect/`, if

```
$ git rev-list --bisect foo ^bar ^baz
```

outputs `midpoint`, the output of the two commands

```
$ git rev-list foo ^midpoint
$ git rev-list midpoint ^bar ^baz
```

would be of roughly the same length.
Finding the change which introduces a regression is thus reduced to a binary search: repeatedly generate and test new midpoints until the commit chain is of length one. --bisect-vars This calculates the same as `--bisect`, except that refs in `refs/bisect/` are not used, and except that this outputs text ready to be eval’ed by the shell. These lines will assign the name of the midpoint revision to the variable `bisect_rev`, and the expected number of commits to be tested after `bisect_rev` is tested to `bisect_nr`, the expected number of commits to be tested if `bisect_rev` turns out to be good to `bisect_good`, the expected number of commits to be tested if `bisect_rev` turns out to be bad to `bisect_bad`, and the number of commits we are bisecting right now to `bisect_all`. --bisect-all This outputs all the commit objects between the included and excluded commits, ordered by their distance to the included and excluded commits. Refs in `refs/bisect/` are not used. The farthest from them is displayed first. (This is the only one displayed by `--bisect`.) This is useful because it makes it easy to choose a good commit to test when you want to avoid testing some of them for some reason (they may not compile, for example). This option can be used along with `--bisect-vars`; in that case, after all the sorted commit objects, there will be the same text as if `--bisect-vars` had been used alone.

### Commit Ordering

By default, the commits are shown in reverse chronological order. --date-order Show no parents before all of its children are shown, but otherwise show commits in the commit timestamp order. --author-date-order Show no parents before all of its children are shown, but otherwise show commits in the author timestamp order. --topo-order Show no parents before all of its children are shown, and avoid showing commits on multiple lines of history intermixed.
For example, in a commit history like this:

```
---1----2----4----7
    \              \
     3----5----6----8---
```

where the numbers denote the order of commit timestamps, `git rev-list` and friends with `--date-order` show the commits in the timestamp order: 8 7 6 5 4 3 2 1. With `--topo-order`, they would show 8 6 5 3 7 4 2 1 (or 8 7 4 2 6 5 3 1); some older commits are shown before newer ones in order to avoid showing the commits from two parallel development tracks mixed together. --reverse Output the commits chosen to be shown (see Commit Limiting section above) in reverse order. Cannot be combined with `--walk-reflogs`.

### Object Traversal

These options are mostly targeted for packing of Git repositories. --objects Print the object IDs of any object referenced by the listed commits. `--objects foo ^bar` thus means “send me all object IDs which I need to download if I have the commit object `bar` but not `foo`”. --in-commit-order Print tree and blob IDs in order of the commits. The tree and blob IDs are printed after they are first referenced by a commit. --objects-edge Similar to `--objects`, but also print the IDs of excluded commits prefixed with a “-” character. This is used by [git-pack-objects[1]](git-pack-objects) to build a “thin” pack, which records objects in deltified form based on objects contained in these excluded commits to reduce network traffic. --objects-edge-aggressive Similar to `--objects-edge`, but it tries harder to find excluded commits at the cost of increased time. This is used instead of `--objects-edge` to build “thin” packs for shallow repositories. --indexed-objects Pretend as if all trees and blobs used by the index are listed on the command line. Note that you probably want to use `--objects`, too. --unpacked Only useful with `--objects`; print the object IDs that are not in packs. --object-names Only useful with `--objects`; print the names of the object IDs that are found. This is the default behavior.
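A minimal sketch of what `--objects` prints (the throwaway repository and file name below are invented for illustration):

```shell
cd "$(mktemp -d)"
git init -q -b main
echo hello >greeting
git add greeting
git -c user.name=demo -c user.email=demo@example.com commit -qm one

# Prints three object IDs: the commit, its root tree (with no name),
# and the blob, followed by its path "greeting".
git rev-list --objects HEAD
```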
--no-object-names Only useful with `--objects`; does not print the names of the object IDs that are found. This inverts `--object-names`. This flag allows the output to be more easily parsed by commands such as [git-cat-file[1]](git-cat-file). --filter=<filter-spec> Only useful with one of the `--objects*`; omits objects (usually blobs) from the list of printed objects. The `<filter-spec>` may be one of the following: The form `--filter=blob:none` omits all blobs. The form `--filter=blob:limit=<n>[kmg]` omits blobs larger than n bytes or units. n may be zero. The suffixes k, m, and g can be used to name units in KiB, MiB, or GiB. For example, `blob:limit=1k` is the same as `blob:limit=1024`. The form `--filter=object:type=(tag|commit|tree|blob)` omits all objects which are not of the requested type. The form `--filter=sparse:oid=<blob-ish>` uses a sparse-checkout specification contained in the blob (or blob-expression) `<blob-ish>` to omit blobs that would not be required for a sparse checkout on the requested refs. The form `--filter=tree:<depth>` omits all blobs and trees whose depth from the root tree is >= <depth> (minimum depth if an object is located at multiple depths in the commits traversed). <depth>=0 will not include any trees or blobs unless included explicitly in the command-line (or standard input when --stdin is used). <depth>=1 will include only the tree and blobs which are referenced directly by a commit reachable from <commit> or an explicitly-given object. <depth>=2 is like <depth>=1 while also including trees and blobs one more level removed from an explicitly-given commit or tree. Note that the form `--filter=sparse:path=<path>` that wants to read from an arbitrary path on the filesystem has been dropped for security reasons. Multiple `--filter=` flags can be specified to combine filters. Only objects which are accepted by every filter are included. 
The form `--filter=combine:<filter1>+<filter2>+…​<filterN>` can also be used to combine several filters, but this is harder than just repeating the `--filter` flag and is usually not necessary. Filters are joined by `+` and individual filters are %-encoded (i.e. URL-encoded). Besides the `+` and `%` characters, the following characters are reserved and also must be encoded: ``~!@#$^&*()[]{}\;",<>?'` `` as well as all characters with ASCII code <= `0x20`, which includes space and newline. Other arbitrary characters can also be encoded. For instance, `combine:tree:3+blob:none` and `combine:tree%3A3+blob%3Anone` are equivalent. --no-filter Turn off any previous `--filter=` argument. --filter-provided-objects Filter the list of explicitly provided objects, which would otherwise always be printed even if they did not match any of the filters. Only useful with `--filter=`. --filter-print-omitted Only useful with `--filter=`; prints a list of the objects omitted by the filter. Object IDs are prefixed with a “~” character. --missing=<missing-action> A debug option to help with future "partial clone" development. This option specifies how missing objects are handled. The form `--missing=error` requests that rev-list stop with an error if a missing object is encountered. This is the default action. The form `--missing=allow-any` will allow object traversal to continue if a missing object is encountered. Missing objects will silently be omitted from the results. The form `--missing=allow-promisor` is like `allow-any`, but will only allow object traversal to continue for EXPECTED promisor missing objects. Unexpected missing objects will raise an error. The form `--missing=print` is like `allow-any`, but will also print a list of the missing objects. Object IDs are prefixed with a “?” character. --exclude-promisor-objects (For internal use only.) Prefilter object traversal at promisor boundary. This is used with partial clone.
This is stronger than `--missing=allow-promisor` because it limits the traversal, rather than just silencing errors about missing objects. --no-walk[=(sorted|unsorted)] Only show the given commits, but do not traverse their ancestors. This has no effect if a range is specified. If the argument `unsorted` is given, the commits are shown in the order they were given on the command line. Otherwise (if `sorted` or no argument was given), the commits are shown in reverse chronological order by commit time. Cannot be combined with `--graph`. --do-walk Overrides a previous `--no-walk`.

### Commit Formatting

Using these options, [git-rev-list[1]](git-rev-list) will act similarly to the more specialized family of commit log tools: [git-log[1]](git-log), [git-show[1]](git-show), and [git-whatchanged[1]](git-whatchanged). --pretty[=<format>] --format=<format> Pretty-print the contents of the commit logs in a given format, where `<format>` can be one of `oneline`, `short`, `medium`, `full`, `fuller`, `reference`, `email`, `raw`, `format:<string>` and `tformat:<string>`. When `<format>` is none of the above, and has `%placeholder` in it, it acts as if `--pretty=tformat:<format>` were given. See the "PRETTY FORMATS" section for some additional details for each format. When the `=<format>` part is omitted, it defaults to `medium`. Note: you can specify the default pretty format in the repository configuration (see [git-config[1]](git-config)). --abbrev-commit Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. The `--abbrev=<n>` option (which also modifies diff output, if it is displayed) can be used to specify the minimum length of the prefix. This should make `--pretty=oneline` a whole lot more readable for people using 80-column terminals. --no-abbrev-commit Show the full 40-byte hexadecimal commit object name. This negates `--abbrev-commit`, whether explicit or implied by other options such as `--oneline`.
It also overrides the `log.abbrevCommit` variable. --oneline This is a shorthand for "--pretty=oneline --abbrev-commit" used together. --encoding=<encoding> Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non-plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in `X` and we are outputting in `X`, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim. --expand-tabs=<n> --expand-tabs --no-expand-tabs Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is a multiple of `<n>`) in the log message before showing it in the output. `--expand-tabs` is a short-hand for `--expand-tabs=8`, and `--no-expand-tabs` is a short-hand for `--expand-tabs=0`, which disables tab expansion. By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. `medium`, which is the default, `full`, and `fuller`). --show-signature Check the validity of a signed commit object by passing the signature to `gpg --verify` and show the output. --relative-date Synonym for `--date=relative`. --date=<format> Only takes effect for dates shown in human-readable format, such as when using `--pretty`. The `log.date` config variable sets a default value for the log command’s `--date` option. By default, dates are shown in the original time zone (either committer’s or author’s). If `-local` is appended to the format (e.g., `iso-local`), the user’s local time zone is used instead. `--date=relative` shows dates relative to the current time, e.g. “2 hours ago”. The `-local` option has no effect for `--date=relative`. `--date=local` is an alias for `--date=default-local`.
`--date=iso` (or `--date=iso8601`) shows timestamps in an ISO 8601-like format. The differences to the strict ISO 8601 format are:

* a space instead of the `T` date/time delimiter
* a space between time and time zone
* no colon between hours and minutes of the time zone

`--date=iso-strict` (or `--date=iso8601-strict`) shows timestamps in strict ISO 8601 format. `--date=rfc` (or `--date=rfc2822`) shows timestamps in RFC 2822 format, often found in email messages. `--date=short` shows only the date, but not the time, in `YYYY-MM-DD` format. `--date=raw` shows the date as seconds since the epoch (1970-01-01 00:00:00 UTC), followed by a space, and then the timezone as an offset from UTC (a `+` or `-` with four digits; the first two are hours, and the second two are minutes). I.e., as if the timestamp were formatted with `strftime("%s %z")`. Note that the `-local` option does not affect the seconds-since-epoch value (which is always measured in UTC), but does switch the accompanying timezone value. `--date=human` shows the timezone if the timezone does not match the current time zone, and doesn’t print the whole date if that matches (i.e. it skips printing the year for dates that are "this year", but also skips the whole date itself if it’s in the last few days and we can just say what weekday it was). For older dates the hour and minute are also omitted. `--date=unix` shows the date as a Unix epoch timestamp (seconds since 1970). As with `--date=raw`, this is always in UTC and therefore `-local` has no effect. `--date=format:...` feeds the format `...` to your system `strftime`, except for `%s`, `%z`, and `%Z`, which are handled internally. Use `--date=format:%c` to show the date in your system locale’s preferred format. See the `strftime` manual for a complete list of format placeholders. When using `-local`, the correct syntax is `--date=format-local:...`.
`--date=default` is the default format, and is similar to `--date=rfc2822`, with a few exceptions: * there is no comma after the day-of-week * the time zone is omitted when the local time zone is used --header Print the contents of the commit in raw-format; each record is separated with a NUL character. --no-commit-header Suppress the header line containing "commit" and the object ID printed before the specified format. This has no effect on the built-in formats; only custom formats are affected. --commit-header Overrides a previous `--no-commit-header`. --parents Print also the parents of the commit (in the form "commit parent…​"). Also enables parent rewriting, see `History Simplification` above. --children Print also the children of the commit (in the form "commit child…​"). Also enables parent rewriting, see `History Simplification` above. --timestamp Print the raw commit timestamp. --left-right Mark which side of a symmetric difference a commit is reachable from. Commits from the left side are prefixed with `<` and those from the right with `>`. If combined with `--boundary`, those commits are prefixed with `-`. For example, if you have this topology: ``` y---b---b branch B / \ / / . / / \ o---x---a---a branch A ``` you would get an output like this: ``` $ git rev-list --left-right --boundary --pretty=oneline A...B >bbbbbbb... 3rd on b >bbbbbbb... 2nd on b <aaaaaaa... 3rd on a <aaaaaaa... 2nd on a -yyyyyyy... 1st on b -xxxxxxx... 1st on a ``` --graph Draw a text-based graphical representation of the commit history on the left hand side of the output. This may cause extra lines to be printed in between commits, in order for the graph history to be drawn properly. Cannot be combined with `--no-walk`. This enables parent rewriting, see `History Simplification` above. This implies the `--topo-order` option by default, but the `--date-order` option may also be specified. 
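As a quick sketch, combining `--graph` with `--oneline` gives a compact picture of branching history:

```shell
# one line per commit, with an ASCII graph in the left margin
git log --graph --oneline --all
```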
--show-linear-break[=<barrier>] When --graph is not used, all history branches are flattened which can make it hard to see that the two consecutive commits do not belong to a linear branch. This option puts a barrier in between them in that case. If `<barrier>` is specified, it is the string that will be shown instead of the default one. --count Print a number stating how many commits would have been listed, and suppress all other output. When used together with `--left-right`, instead print the counts for left and right commits, separated by a tab. When used together with `--cherry-mark`, omit patch equivalent commits from these counts and print the count for equivalent commits separated by a tab. Pretty formats -------------- If the commit is a merge, and if the pretty-format is not `oneline`, `email` or `raw`, an additional line is inserted before the `Author:` line. This line begins with "Merge: " and the hashes of ancestral commits are printed, separated by spaces. Note that the listed commits may not necessarily be the list of the **direct** parent commits if you have limited your view of history: for example, if you are only interested in changes related to a certain directory or file. There are several built-in formats, and you can define additional formats by setting a pretty.<name> config option to either another format name, or a `format:` string, as described below (see [git-config[1]](git-config)). Here are the details of the built-in formats: * `oneline` ``` <hash> <title-line> ``` This is designed to be as compact as possible. 
* `short` ``` commit <hash> Author: <author> ``` ``` <title-line> ``` * `medium` ``` commit <hash> Author: <author> Date: <author-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `full` ``` commit <hash> Author: <author> Commit: <committer> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `fuller` ``` commit <hash> Author: <author> AuthorDate: <author-date> Commit: <committer> CommitDate: <committer-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `reference` ``` <abbrev-hash> (<title-line>, <short-author-date>) ``` This format is used to refer to another commit in a commit message and is the same as `--pretty='format:%C(auto)%h (%s, %ad)'`. By default, the date is formatted with `--date=short` unless another `--date` option is explicitly specified. As with any `format:` with format placeholders, its output is not affected by other options like `--decorate` and `--walk-reflogs`. * `email` ``` From <hash> <date> From: <author> Date: <author-date> Subject: [PATCH] <title-line> ``` ``` <full-commit-message> ``` * `mboxrd` Like `email`, but lines in the commit message starting with "From " (preceded by zero or more ">") are quoted with ">" so they aren’t confused as starting a new commit. * `raw` The `raw` format shows the entire commit exactly as stored in the commit object. Notably, the hashes are displayed in full, regardless of whether --abbrev or --no-abbrev are used, and `parents` information show the true parent commits, without taking grafts or history simplification into account. Note that this format affects the way commits are displayed, but not the way the diff is shown e.g. with `git log --raw`. To get full object names in a raw diff format, use `--no-abbrev`. * `format:<format-string>` The `format:<format-string>` format allows you to specify which information you want to show. It works a little bit like printf format, with the notable exception that you get a newline with `%n` instead of `\n`. 
E.g., `format:"The author of %h was %an, %ar%nThe title was >>%s<<%n"` would show something like this: ``` The author of fe6e0ee was Junio C Hamano, 23 hours ago The title was >>t4119: test autocomputing -p<n> for traditional diff input.<< ``` The placeholders are: + Placeholders that expand to a single literal character: *%n* newline *%%* a raw `%` *%x00* print a byte from a hex code + Placeholders that affect formatting of later placeholders: *%Cred* switch color to red *%Cgreen* switch color to green *%Cblue* switch color to blue *%Creset* reset color *%C(…)* color specification, as described under Values in the "CONFIGURATION FILE" section of [git-config[1]](git-config). By default, colors are shown only when enabled for log output (by `color.diff`, `color.ui`, or `--color`, and respecting the `auto` settings of the former if we are going to a terminal). `%C(auto,...)` is accepted as a historical synonym for the default (e.g., `%C(auto,red)`). Specifying `%C(always,...)` will show the colors even when color is not otherwise enabled (though consider just using `--color=always` to enable color for the whole output, including this format and anything else git might color). `auto` alone (i.e. `%C(auto)`) will turn on auto coloring on the next placeholders until the color is switched again. *%m* left (`<`), right (`>`) or boundary (`-`) mark *%w([<w>[,<i1>[,<i2>]]])* switch line wrapping, like the -w option of [git-shortlog[1]](git-shortlog). *%<(<N>[,trunc|ltrunc|mtrunc])* make the next placeholder take at least N columns, padding spaces on the right if necessary. Optionally truncate at the beginning (ltrunc), the middle (mtrunc) or the end (trunc) if the output is longer than N columns. Note that truncating only works correctly with N >= 2. 
*%<|(<N>)* make the next placeholder extend at least to the Nth display column, padding spaces on the right if necessary *%>(<N>)*, *%>|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding spaces on the left *%>>(<N>)*, *%>>|(<N>)* similar to `%>(<N>)`, `%>|(<N>)` respectively, except that if the next placeholder takes more spaces than given and there are spaces on its left, use those spaces *%><(<N>)*, *%><|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding both sides (i.e. the text is centered) + Placeholders that expand to information extracted from the commit: *%H* commit hash *%h* abbreviated commit hash *%T* tree hash *%t* abbreviated tree hash *%P* parent hashes *%p* abbreviated parent hashes *%an* author name *%aN* author name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ae* author email *%aE* author email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%al* author email local-part (the part before the `@` sign) *%aL* author email local-part (see `%al`) respecting .mailmap (see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ad* author date (format respects --date= option) *%aD* author date, RFC2822 style *%ar* author date, relative *%at* author date, UNIX timestamp *%ai* author date, ISO 8601-like format *%aI* author date, strict ISO 8601 format *%as* author date, short format (`YYYY-MM-DD`) *%ah* author date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list)) *%cn* committer name *%cN* committer name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ce* committer email *%cE* committer email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%cl* committer email local-part (the part before the `@` sign) *%cL* committer email local-part (see `%cl`) respecting .mailmap (see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%cd* 
committer date (format respects --date= option) *%cD* committer date, RFC2822 style *%cr* committer date, relative *%ct* committer date, UNIX timestamp *%ci* committer date, ISO 8601-like format *%cI* committer date, strict ISO 8601 format *%cs* committer date, short format (`YYYY-MM-DD`) *%ch* committer date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list)) *%d* ref names, like the --decorate option of [git-log[1]](git-log) *%D* ref names without the " (", ")" wrapping. *%(describe[:options])* human-readable name, like [git-describe[1]](git-describe); empty string for undescribable commits. The `describe` string may be followed by a colon and zero or more comma-separated options. Descriptions can be inconsistent when tags are added or removed at the same time. - `tags[=<bool-value>]`: Instead of only considering annotated tags, consider lightweight tags as well. - `abbrev=<number>`: Instead of using the default number of hexadecimal digits (which will vary according to the number of objects in the repository with a default of 7) of the abbreviated object name, use <number> digits, or as many digits as needed to form a unique object name. - `match=<pattern>`: Only consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix. - `exclude=<pattern>`: Do not consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix. 
*%S* ref name given on the command line by which the commit was reached (like `git log --source`), only works with `git log` *%e* encoding *%s* subject *%f* sanitized subject line, suitable for a filename *%b* body *%B* raw body (unwrapped subject and body) *%GG* raw verification message from GPG for a signed commit *%G?* show "G" for a good (valid) signature, "B" for a bad signature, "U" for a good signature with unknown validity, "X" for a good signature that has expired, "Y" for a good signature made by an expired key, "R" for a good signature made by a revoked key, "E" if the signature cannot be checked (e.g. missing key) and "N" for no signature *%GS* show the name of the signer for a signed commit *%GK* show the key used to sign a signed commit *%GF* show the fingerprint of the key used to sign a signed commit *%GP* show the fingerprint of the primary key whose subkey was used to sign a signed commit *%GT* show the trust level for the key used to sign a signed commit *%gD* reflog selector, e.g., `refs/stash@{1}` or `refs/stash@{2 minutes ago}`; the format follows the rules described for the `-g` option. The portion before the `@` is the refname as given on the command line (so `git log -g refs/heads/master` would yield `refs/heads/master@{0}`). *%gd* shortened reflog selector; same as `%gD`, but the refname portion is shortened for human readability (so `refs/heads/master` becomes just `master`). *%gn* reflog identity name *%gN* reflog identity name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ge* reflog identity email *%gE* reflog identity email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%gs* reflog subject *%(trailers[:options])* display the trailers of the body as interpreted by [git-interpret-trailers[1]](git-interpret-trailers). The `trailers` string may be followed by a colon and zero or more comma-separated options. 
If any option is provided multiple times the last occurrence wins. - `key=<key>`: only show trailers with specified <key>. Matching is done case-insensitively and trailing colon is optional. If option is given multiple times trailer lines matching any of the keys are shown. This option automatically enables the `only` option so that non-trailer lines in the trailer block are hidden. If that is not desired it can be disabled with `only=false`. E.g., `%(trailers:key=Reviewed-by)` shows trailer lines with key `Reviewed-by`. - `only[=<bool>]`: select whether non-trailer lines from the trailer block should be included. - `separator=<sep>`: specify a separator inserted between trailer lines. When this option is not given each trailer line is terminated with a line feed character. The string <sep> may contain the literal formatting codes described above. To use comma as separator one must use `%x2C` as it would otherwise be parsed as next option. E.g., `%(trailers:key=Ticket,separator=%x2C )` shows all trailer lines whose key is "Ticket" separated by a comma and a space. - `unfold[=<bool>]`: make it behave as if interpret-trailer’s `--unfold` option was given. E.g., `%(trailers:only,unfold=true)` unfolds and shows all trailer lines. - `keyonly[=<bool>]`: only show the key part of the trailer. - `valueonly[=<bool>]`: only show the value part of the trailer. - `key_value_separator=<sep>`: specify a separator inserted between trailer lines. When this option is not given each trailer key-value pair is separated by ": ". Otherwise it shares the same semantics as `separator=<sep>` above. | | | | --- | --- | | Note | Some placeholders may depend on other options given to the revision traversal engine. For example, the `%g*` reflog options will insert an empty string unless we are traversing reflog entries (e.g., by `git log -g`). The `%d` and `%D` placeholders will use the "short" decoration format if `--decorate` was not already provided on the command line. 
| The boolean options accept an optional value `[=<bool-value>]`. The values `true`, `false`, `on`, `off` etc. are all accepted. See the "boolean" sub-section in "EXAMPLES" in [git-config[1]](git-config). If a boolean option is given with no value, it’s enabled. If you add a `+` (plus sign) after `%` of a placeholder, a line-feed is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string. If you add a `-` (minus sign) after `%` of a placeholder, all consecutive line-feeds immediately preceding the expansion are deleted if and only if the placeholder expands to an empty string. If you add a ` ` (space) after `%` of a placeholder, a space is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string. * `tformat:` The `tformat:` format works exactly like `format:`, except that it provides "terminator" semantics instead of "separator" semantics. In other words, each commit has the message terminator character (usually a newline) appended, rather than a separator placed between entries. This means that the final entry of a single-line format will be properly terminated with a new line, just as the "oneline" format does. For example: ``` $ git log -2 --pretty=format:%h 4da45bef \ | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/' 4da45be 7134973 -- NO NEWLINE $ git log -2 --pretty=tformat:%h 4da45bef \ | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/' 4da45be 7134973 ``` In addition, any unrecognized string that has a `%` in it is interpreted as if it has `tformat:` in front of it. For example, these two are equivalent: ``` $ git log -2 --pretty=tformat:%h 4da45bef $ git log -2 --pretty=%h 4da45bef ``` Examples -------- * Print the list of commits reachable from the current branch. ``` git rev-list HEAD ``` * Print the list of commits on this branch, but not present in the upstream branch. 
``` git rev-list @{upstream}..HEAD ``` * Format commits with their author and commit message (see also the porcelain [git-log[1]](git-log)). ``` git rev-list --format=medium HEAD ``` * Format commits along with their diffs (see also the porcelain [git-log[1]](git-log), which can do this in a single process). ``` git rev-list HEAD | git diff-tree --stdin --format=medium -p ``` * Print the list of commits on the current branch that touched any file in the `Documentation` directory. ``` git rev-list HEAD -- Documentation/ ``` * Print the list of commits authored by you in the past year, on any branch, tag, or other ref. ``` git rev-list --author=you@example.com --since=1.year.ago --all ``` * Print the list of objects reachable from the current branch (i.e., all commits and the blobs and trees they contain). ``` git rev-list --objects HEAD ``` * Compare the disk size of all reachable objects, versus those reachable from reflogs, versus the total packed size. This can tell you whether running `git repack -ad` might reduce the repository size (by dropping unreachable objects), and whether expiring reflogs might help. ``` # reachable objects git rev-list --disk-usage --objects --all # plus reflogs git rev-list --disk-usage --objects --all --reflog # total disk size used du -c .git/objects/pack/*.pack .git/objects/??/* # alternative to du: add up "size" and "size-pack" fields git count-objects -v ``` * Report the disk size of each branch, not including objects used by the current branch. This can find outliers that are contributing to a bloated repository size (e.g., because somebody accidentally committed large build artifacts). ``` git for-each-ref --format='%(refname)' | while read branch do size=$(git rev-list --disk-usage --objects HEAD..$branch) echo "$size $branch" done | sort -n ``` * Compare the on-disk size of branches in one group of refs, excluding another. 
If you co-mingle objects from multiple remotes in a single repository, this can show which remotes are contributing to the repository size (taking the size of `origin` as a baseline). ``` git rev-list --disk-usage --objects --remotes=$suspect --not --remotes=origin ```
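Similarly, the `--count` option described earlier pairs well with symmetric ranges; `main` and `topic` here are placeholder branch names:

```shell
# number of commits only on main (left) and only on topic
# (right), separated by a tab
git rev-list --count --left-right main...topic
```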
git git-credential git-credential ============== Name ---- git-credential - Retrieve and store user credentials Synopsis -------- ``` 'git credential' (fill|approve|reject) ``` Description ----------- Git has an internal interface for storing and retrieving credentials from system-specific helpers, as well as prompting the user for usernames and passwords. The git-credential command exposes this interface to scripts which may want to retrieve, store, or prompt for credentials in the same manner as Git. The design of this scriptable interface models the internal C API; see credential.h for more background on the concepts. git-credential takes an "action" option on the command-line (one of `fill`, `approve`, or `reject`) and reads a credential description on stdin (see [INPUT/OUTPUT FORMAT](#IOFMT)). If the action is `fill`, git-credential will attempt to add "username" and "password" attributes to the description by reading config files, by contacting any configured credential helpers, or by prompting the user. The username and password attributes of the credential description are then printed to stdout together with the attributes already provided. If the action is `approve`, git-credential will send the description to any configured credential helpers, which may store the credential for later use. If the action is `reject`, git-credential will send the description to any configured credential helpers, which may erase any stored credential matching the description. If the action is `approve` or `reject`, no output should be emitted. Typical use of git credential ----------------------------- An application using git-credential will typically use `git credential` following these steps: 1. Generate a credential description based on the context. 
For example, if we want a password for `https://example.com/foo.git`, we might generate the following credential description (don’t forget the blank line at the end; it tells `git credential` that the application finished feeding all the information it has): ``` protocol=https host=example.com path=foo.git ``` 2. Ask git-credential to give us a username and password for this description. This is done by running `git credential fill`, feeding the description from step (1) to its standard input. The complete credential description (including the credential per se, i.e. the login and password) will be produced on standard output, like: ``` protocol=https host=example.com username=bob password=secr3t ``` In most cases, this means the attributes given in the input will be repeated in the output, but Git may also modify the credential description, for example by removing the `path` attribute when the protocol is HTTP(s) and `credential.useHttpPath` is false. If the `git credential` knew about the password, this step may not have involved the user actually typing this password (the user may have typed a password to unlock the keychain instead, or no user interaction was done if the keychain was already unlocked) before it returned `password=secr3t`. 3. Use the credential (e.g., access the URL with the username and password from step (2)), and see if it’s accepted. 4. Report on the success or failure of the password. If the credential allowed the operation to complete successfully, then it can be marked with an "approve" action to tell `git credential` to reuse it in its next invocation. If the credential was rejected during the operation, use the "reject" action so that `git credential` will ask for a new password in its next invocation. In either case, `git credential` should be fed with the credential description obtained from step (2) (which also contain the ones provided in step (1)). 
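Steps (1) and (2) above can be tried from a shell. A plain `git credential fill` may prompt interactively, so this sketch wires in a throwaway inline helper to answer the request (the helper script and the `bob`/`secr3t` values are purely illustrative):

```shell
# feed a credential description on stdin; the inline helper
# answers the "get" request so no interactive prompt appears
git -c credential.helper='!f() { test "$1" = get && printf "username=bob\npassword=secr3t\n"; }; f' \
    credential fill <<'EOF'
protocol=https
host=example.com
EOF
```

The output repeats the description with `username` and `password` attributes filled in, exactly as described in step (2).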
Input/output format ------------------- `git credential` reads and/or writes (depending on the action used) credential information in its standard input/output. This information can correspond either to keys for which `git credential` will obtain the login information (e.g. host, protocol, path), or to the actual credential data to be obtained (username/password). The credential is split into a set of named attributes, with one attribute per line. Each attribute is specified by a key-value pair, separated by an `=` (equals) sign, followed by a newline. The key may contain any bytes except `=`, newline, or NUL. The value may contain any bytes except newline or NUL. In both cases, all bytes are treated as-is (i.e., there is no quoting, and one cannot transmit a value with newline or NUL in it). The list of attributes is terminated by a blank line or end-of-file. Git understands the following attributes: `protocol` The protocol over which the credential will be used (e.g., `https`). `host` The remote hostname for a network credential. This includes the port number if one was specified (e.g., "example.com:8088"). `path` The path with which the credential will be used. E.g., for accessing a remote https repository, this will be the repository’s path on the server. `username` The credential’s username, if we already have one (e.g., from a URL, the configuration, the user, or from a previously run helper). `password` The credential’s password, if we are asking it to be stored. `url` When this special attribute is read by `git credential`, the value is parsed as a URL and treated as if its constituent parts were read (e.g., `url=https://example.com` would behave as if `protocol=https` and `host=example.com` had been provided). This can help callers avoid parsing URLs themselves. 
Note that specifying a protocol is mandatory and if the URL doesn’t specify a hostname (e.g., "cert:///path/to/file") the credential will contain a hostname attribute whose value is an empty string. Components which are missing from the URL (e.g., there is no username in the example above) will be left unset. Unrecognised attributes are silently discarded. git git-blame git-blame ========= Name ---- git-blame - Show what revision and author last modified each line of a file Synopsis -------- ``` git blame [-c] [-b] [-l] [--root] [-t] [-f] [-n] [-s] [-e] [-p] [-w] [--incremental] [-L <range>] [-S <revs-file>] [-M] [-C] [-C] [-C] [--since=<date>] [--ignore-rev <rev>] [--ignore-revs-file <file>] [--color-lines] [--color-by-age] [--progress] [--abbrev=<n>] [<rev> | --contents <file> | --reverse <rev>..<rev>] [--] <file> ``` Description ----------- Annotates each line in the given file with information from the revision which last modified the line. Optionally, start annotating from the given revision. When specified one or more times, `-L` restricts annotation to the requested lines. The origin of lines is automatically followed across whole-file renames (currently there is no option to turn the rename-following off). To follow lines moved from one file to another, or to follow lines that were copied and pasted from another file, etc., see the `-C` and `-M` options. The report does not tell you anything about lines which have been deleted or replaced; you need to use a tool such as `git diff` or the "pickaxe" interface briefly mentioned in the following paragraph. Apart from supporting file annotation, Git also supports searching the development history for when a code snippet occurred in a change. This makes it possible to track when a code snippet was added to a file, moved or copied between files, and eventually deleted or replaced. It works by searching for a text string in the diff. 
A small example of the pickaxe interface that searches for `blame_usage`: ``` $ git log --pretty=oneline -S'blame_usage' 5040f17eba15504bad66b14a645bddd9b015ebb7 blame -S <ancestry-file> ea4c7f9bf69e781dd0cd88d2bccb2bf5cc15c9a7 git-blame: Make the output ``` Options ------- -b Show blank SHA-1 for boundary commits. This can also be controlled via the `blame.blankBoundary` config option. --root Do not treat root commits as boundaries. This can also be controlled via the `blame.showRoot` config option. --show-stats Include additional statistics at the end of blame output. -L <start>,<end> -L :<funcname> Annotate only the line range given by `<start>,<end>`, or by the function name regex `<funcname>`. May be specified multiple times. Overlapping ranges are allowed. `<start>` and `<end>` are optional. `-L <start>` or `-L <start>,` spans from `<start>` to end of file. `-L ,<end>` spans from start of file to `<end>`. `<start>` and `<end>` can take one of these forms: * number If `<start>` or `<end>` is a number, it specifies an absolute line number (lines count from 1). * `/regex/` This form will use the first line matching the given POSIX regex. If `<start>` is a regex, it will search from the end of the previous `-L` range, if any, otherwise from the start of file. If `<start>` is `^/regex/`, it will search from the start of file. If `<end>` is a regex, it will search starting at the line given by `<start>`. * +offset or -offset This is only valid for `<end>` and will specify a number of lines before or after the line given by `<start>`. If `:<funcname>` is given in place of `<start>` and `<end>`, it is a regular expression that denotes the range from the first funcname line that matches `<funcname>`, up to the next funcname line. `:<funcname>` searches from the end of the previous `-L` range, if any, otherwise from the start of file. `^:<funcname>` searches from the start of file. 
The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). -l Show long rev (Default: off). -t Show raw timestamp (Default: off). -S <revs-file> Use revisions from revs-file instead of calling [git-rev-list[1]](git-rev-list). --reverse <rev>..<rev> Walk history forward instead of backward. Instead of showing the revision in which a line appeared, this shows the last revision in which a line has existed. This requires a range of revision like START..END where the path to blame exists in START. `git blame --reverse START` is taken as `git blame --reverse START..HEAD` for convenience. --first-parent Follow only the first parent commit upon seeing a merge commit. This option can be used to determine when a line was introduced to a particular integration branch, rather than when it was introduced to the history overall. -p --porcelain Show in a format designed for machine consumption. --line-porcelain Show the porcelain format, but output commit information for each line, not just the first time a commit is referenced. Implies --porcelain. --incremental Show the result incrementally in a format designed for machine consumption. --encoding=<encoding> Specifies the encoding used to output author names and commit summaries. Setting it to `none` makes blame output unconverted data. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page. --contents <file> When <rev> is not specified, the command annotates the changes starting backwards from the working tree copy. This flag makes the command pretend as if the working tree copy has the contents of the named file (specify `-` to make the command read from the standard input). --date <format> Specifies the format used to output dates. If --date is not provided, the value of the blame.date config variable is used. 
If the blame.date config variable is also not set, the iso format is used. For supported values, see the discussion of the --date option at [git-log[1]](git-log). --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal. This flag enables progress reporting even if not attached to a terminal. Can’t use `--progress` together with `--porcelain` or `--incremental`. -M[<num>] Detect moved or copied lines within a file. When a commit moves or copies a block of lines (e.g. the original file has A and then B, and the commit changes it to B and then A), the traditional `blame` algorithm notices only half of the movement and typically blames the lines that were moved up (i.e. B) to the parent and assigns blame to the lines that were moved down (i.e. A) to the child commit. With this option, both groups of lines are blamed on the parent by running extra passes of inspection. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying within a file for it to associate those lines with the parent commit. The default value is 20. -C[<num>] In addition to `-M`, detect lines moved or copied from other files that were modified in the same commit. This is useful when you reorganize your program and move code around across files. When this option is given twice, the command additionally looks for copies from other files in the commit that creates the file. When this option is given three times, the command additionally looks for copies from other files in any commit. <num> is optional but it is the lower bound on the number of alphanumeric characters that Git must detect as moving/copying between files for it to associate those lines with the parent commit. And the default value is 40. If there are more than one `-C` options given, the <num> argument of the last `-C` will take effect. 
--ignore-rev <rev> Ignore changes made by the revision when assigning blame, as if the change never happened. Lines that were changed or added by an ignored commit will be blamed on the previous commit that changed that line or nearby lines. This option may be specified multiple times to ignore more than one revision. If the `blame.markIgnoredLines` config option is set, then lines that were changed by an ignored commit and attributed to another commit will be marked with a `?` in the blame output. If the `blame.markUnblamableLines` config option is set, then those lines touched by an ignored commit that we could not attribute to another revision are marked with a `*`. --ignore-revs-file <file> Ignore revisions listed in `file`, which must be in the same format as an `fsck.skipList`. This option may be repeated, and these files will be processed after any files specified with the `blame.ignoreRevsFile` config option. An empty file name, `""`, will clear the list of revs from previously processed files. --color-lines Color line annotations in the default format differently if they come from the same commit as the preceding line. This makes it easier to distinguish code blocks introduced by different commits. The color defaults to cyan and can be adjusted using the `color.blame.repeatedLines` config option. --color-by-age Color line annotations depending on the age of the line in the default format. The `color.blame.highlightRecent` config option controls what color is used for each range of age. -h Show help message. -c Use the same output mode as [git-annotate[1]](git-annotate) (Default: off). --score-debug Include debugging information related to the movement of lines between files (see `-C`) and lines moved within a file (see `-M`). The first number listed is the score. This is the number of alphanumeric characters detected as having been moved between or within files. 
This must be above a certain threshold for `git blame` to consider those lines of code to have been moved. -f --show-name Show the filename in the original commit. By default the filename is shown if there is any line that came from a file with a different name, due to rename detection. -n --show-number Show the line number in the original commit (Default: off). -s Suppress the author name and timestamp from the output. -e --show-email Show the author email instead of author name (Default: off). This can also be controlled via the `blame.showEmail` config option. -w Ignore whitespace when comparing the parent’s version and the child’s to find where the lines came from. --abbrev=<n> Instead of using the default 7+1 hexadecimal digits as the abbreviated object name, use <m>+1 digits, where <m> is at least <n> but ensures the commit object names are unique. Note that 1 column is used for a caret to mark the boundary commit. The default format ------------------ When neither `--porcelain` nor `--incremental` option is specified, `git blame` will output annotation for each line with: * abbreviated object name for the commit the line came from; * author ident (by default author name and date, unless `-s` or `-e` is specified); and * line number before the line contents. The porcelain format -------------------- In this format, each line is output after a header; the header at the minimum has the first line which has: * 40-byte SHA-1 of the commit the line is attributed to; * the line number of the line in the original file; * the line number of the line in the final file; * on a line that starts a group of lines from a different commit than the previous one, the number of lines in this group. On subsequent lines this field is absent. This header line is followed by the following information at least once for each commit: * the author name ("author"), email ("author-mail"), time ("author-time"), and time zone ("author-tz"); similarly for committer. 
* the filename in the commit that the line is attributed to. * the first line of the commit log message ("summary"). The contents of the actual line is output after the above header, prefixed by a TAB. This is to allow adding more header elements later. The porcelain format generally suppresses commit information that has already been seen. For example, two lines that are blamed to the same commit will both be shown, but the details for that commit will be shown only once. This is more efficient, but may require more state be kept by the reader. The `--line-porcelain` option can be used to output full commit information for each line, allowing simpler (but less efficient) usage like: ``` # count the number of lines attributed to each author git blame --line-porcelain file | sed -n 's/^author //p' | sort | uniq -c | sort -rn ``` Specifying ranges ----------------- Unlike `git blame` and `git annotate` in older versions of git, the extent of the annotation can be limited to both line ranges and revision ranges. The `-L` option, which limits annotation to a range of lines, may be specified multiple times. When you are interested in finding the origin for lines 40-60 for file `foo`, you can use the `-L` option like so (they mean the same thing — both ask for 21 lines starting at line 40): ``` git blame -L 40,60 foo git blame -L 40,+21 foo ``` Also you can use a regular expression to specify the line range: ``` git blame -L '/^sub hello {/,/^}$/' foo ``` which limits the annotation to the body of the `hello` subroutine. When you are not interested in changes older than version v2.6.18, or changes older than 3 weeks, you can use revision range specifiers similar to `git rev-list`: ``` git blame v2.6.18.. 
-- foo git blame --since=3.weeks -- foo ``` When revision range specifiers are used to limit the annotation, lines that have not changed since the range boundary (either the commit v2.6.18 or the most recent commit that is more than 3 weeks old in the above example) are blamed for that range boundary commit. A particularly useful way is to see if an added file has lines created by copy-and-paste from existing files. Sometimes this indicates that the developer was being sloppy and did not refactor the code properly. You can first find the commit that introduced the file with: ``` git log --diff-filter=A --pretty=short -- foo ``` and then annotate the change between the commit and its parents, using `commit^!` notation: ``` git blame -C -C -f $commit^! -- foo ``` Incremental output ------------------ When called with `--incremental` option, the command outputs the result as it is built. The output generally will talk about lines touched by more recent commits first (i.e. the lines will be annotated out of order) and is meant to be used by interactive viewers. The output format is similar to the Porcelain format, but it does not contain the actual lines from the file that is being annotated. 1. Each blame entry always starts with a line of: ``` <40-byte hex sha1> <sourceline> <resultline> <num_lines> ``` Line numbers count from 1. 2. The first time that a commit shows up in the stream, it has various other information about it printed out with a one-word tag at the beginning of each line describing the extra commit information (author, email, committer, dates, summary, etc.). 3. Unlike the Porcelain format, the filename information is always given and terminates the entry: ``` "filename" <whitespace-quoted-filename-goes-here> ``` and thus it is really quite easy to parse for some line- and word-oriented parser (which should be quite natural for most scripting languages). 
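The incremental stream can be watched in a scratch repository. This is a sketch assuming `git` is installed; the repository name, file name, and identity below are all arbitrary:

```shell
# Scratch repository to watch the incremental stream
# (assumes git is installed; all names here are arbitrary).
git init -q incdemo && cd incdemo
git config user.name demo && git config user.email demo@example.com
printf 'one\n' > f && git add f && git commit -qm A
printf 'one\ntwo\n' > f && git commit -aqm B
# Entries are emitted as they are computed, generally with lines touched
# by more recent commits first; each entry ends with a "filename" line.
git blame --incremental f
```

Commit B's entry covers the line it added, commit A's entry covers the original line, and each entry is closed by its `filename f` terminator.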
| | | | --- | --- | | Note | For people who do parsing: to make it more robust, just ignore any lines between the first and last one ("<sha1>" and "filename" lines) where you do not recognize the tag words (or care about that particular one) at the beginning of the "extended information" lines. That way, if there is ever added information (like the commit encoding or extended commit commentary), a blame viewer will not care. | Mapping authors --------------- See [gitmailmap[5]](gitmailmap). Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: blame.blankBoundary Show blank commit object name for boundary commits in [git-blame[1]](git-blame). This option defaults to false. blame.coloring This determines the coloring scheme to be applied to blame output. It can be `repeatedLines`, `highlightRecent`, or `none` which is the default. blame.date Specifies the format used to output dates in [git-blame[1]](git-blame). If unset the iso format is used. For supported values, see the discussion of the `--date` option at [git-log[1]](git-log). blame.showEmail Show the author email instead of author name in [git-blame[1]](git-blame). This option defaults to false. blame.showRoot Do not treat root commits as boundaries in [git-blame[1]](git-blame). This option defaults to false. blame.ignoreRevsFile Ignore revisions listed in the file, one unabbreviated object name per line, in [git-blame[1]](git-blame). Whitespace and comments beginning with `#` are ignored. This option may be repeated multiple times. Empty file names will reset the list of ignored revisions. This option will be handled before the command line option `--ignore-revs-file`. blame.markUnblamableLines Mark lines that were changed by an ignored revision that we could not attribute to another commit with a `*` in the output of [git-blame[1]](git-blame). 
blame.markIgnoredLines Mark lines that were changed by an ignored revision that we attributed to another commit with a `?` in the output of [git-blame[1]](git-blame). See also -------- [git-annotate[1]](git-annotate)
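The `blame.ignoreRevsFile` machinery described above can be tried end to end in a scratch repository. A sketch assuming Git 2.23 or later; every name below is illustrative:

```shell
# Scratch repository; assumes Git 2.23 or later, all names illustrative.
git init -q blamedemo && cd blamedemo
git config user.name demo && git config user.email demo@example.com
printf 'hello\n' > f && git add f && git commit -qm original
printf 'HELLO\n' > f && git commit -aqm 'mass reformat'
# List the reformat commit as a revision to ignore, then point
# blame.ignoreRevsFile at that list.
git rev-parse HEAD > .git-blame-ignore-revs
git config blame.ignoreRevsFile .git-blame-ignore-revs
git blame f   # the line is blamed on "original", not "mass reformat"
```

Setting `blame.markIgnoredLines` in addition would prefix the reattributed line with `?` in the output.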
git git-merge-one-file git-merge-one-file ================== Name ---- git-merge-one-file - The standard helper program to use with git-merge-index Synopsis -------- ``` git merge-one-file ``` Description ----------- This is the standard helper program to use with `git merge-index` to resolve a merge after the trivial merge done with `git read-tree -m`. git git-mailsplit git-mailsplit ============= Name ---- git-mailsplit - Simple UNIX mbox splitter program Synopsis -------- ``` git mailsplit [-b] [-f<nn>] [-d<prec>] [--keep-cr] [--mboxrd] -o<directory> [--] [(<mbox>|<Maildir>)…​] ``` Description ----------- Splits a mbox file or a Maildir into a list of files: "0001" "0002" .. in the specified directory so you can process them further from there. | | | | --- | --- | | Important | Maildir splitting relies upon filenames being sorted to output patches in the correct order. | Options ------- <mbox> Mbox file to split. If not given, the mbox is read from the standard input. <Maildir> Root of the Maildir to split. This directory should contain the cur, tmp and new subdirectories. -o<directory> Directory in which to place the individual messages. -b If any file doesn’t begin with a From line, assume it is a single mail message instead of signaling error. -d<prec> Instead of the default 4 digits with leading zeros, different precision can be specified for the generated filenames. -f<nn> Skip the first <nn> numbers, for example if -f3 is specified, start the numbering with 0004. --keep-cr Do not remove `\r` from lines ending with `\r\n`. --mboxrd Input is of the "mboxrd" format and "^>+From " line escaping is reversed. 
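A minimal round trip for `git mailsplit`: the two-message mbox below is hand-made for illustration, and the directory name is arbitrary.

```shell
# Split a hand-made two-message mbox into numbered files
# (file and directory names are arbitrary).
printf 'From a@example.com Thu Jan  1 00:00:00 1970\nSubject: first\n\nbody one\n\nFrom b@example.com Thu Jan  1 00:00:00 1970\nSubject: second\n\nbody two\n' > sample.mbox
mkdir -p msgs
git mailsplit -omsgs sample.mbox   # prints how many messages were written
ls msgs                            # 0001  0002
```

The resulting `0001`, `0002`, … files can then be fed to further processing, which is how `git am` consumes a patch series.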
git git-reflog git-reflog ========== Name ---- git-reflog - Manage reflog information Synopsis -------- ``` git reflog [show] [<log-options>] [<ref>] git reflog expire [--expire=<time>] [--expire-unreachable=<time>] [--rewrite] [--updateref] [--stale-fix] [--dry-run | -n] [--verbose] [--all [--single-worktree] | <refs>…​] git reflog delete [--rewrite] [--updateref] [--dry-run | -n] [--verbose] <ref>@{<specifier>}…​ git reflog exists <ref> ``` Description ----------- This command manages the information recorded in the reflogs. Reference logs, or "reflogs", record when the tips of branches and other references were updated in the local repository. Reflogs are useful in various Git commands, to specify the old value of a reference. For example, `HEAD@{2}` means "where HEAD used to be two moves ago", `master@{one.week.ago}` means "where master used to point to one week ago in this local repository", and so on. See [gitrevisions[7]](gitrevisions) for more details. The command takes various subcommands, and different options depending on the subcommand: The "show" subcommand (which is also the default, in the absence of any subcommands) shows the log of the reference provided in the command-line (or `HEAD`, by default). The reflog covers all recent actions, and in addition the `HEAD` reflog records branch switching. `git reflog show` is an alias for `git log -g --abbrev-commit --pretty=oneline`; see [git-log[1]](git-log) for more information. The "expire" subcommand prunes older reflog entries. Entries older than `expire` time, or entries older than `expire-unreachable` time and not reachable from the current tip, are removed from the reflog. This is typically not used directly by end users — instead, see [git-gc[1]](git-gc). The "delete" subcommand deletes single entries from the reflog. Its argument must be an `exact` entry (e.g. "`git reflog delete master@{2}`"). This subcommand is also typically not used directly by end users. 
The "exists" subcommand checks whether a ref has a reflog. It exits with zero status if the reflog exists, and non-zero status if it does not. Options ------- ### Options for `show` `git reflog show` accepts any of the options accepted by `git log`. ### Options for `expire` --all Process the reflogs of all references. --single-worktree By default when `--all` is specified, reflogs from all working trees are processed. This option limits the processing to reflogs from the current working tree only. --expire=<time> Prune entries older than the specified time. If this option is not specified, the expiration time is taken from the configuration setting `gc.reflogExpire`, which in turn defaults to 90 days. `--expire=all` prunes entries regardless of their age; `--expire=never` turns off pruning of reachable entries (but see `--expire-unreachable`). --expire-unreachable=<time> Prune entries older than `<time>` that are not reachable from the current tip of the branch. If this option is not specified, the expiration time is taken from the configuration setting `gc.reflogExpireUnreachable`, which in turn defaults to 30 days. `--expire-unreachable=all` prunes unreachable entries regardless of their age; `--expire-unreachable=never` turns off early pruning of unreachable entries (but see `--expire`). --updateref Update the reference to the value of the top reflog entry (i.e. <ref>@{0}) if the previous top entry was pruned. (This option is ignored for symbolic references.) --rewrite If a reflog entry’s predecessor is pruned, adjust its "old" SHA-1 to be equal to the "new" SHA-1 field of the entry that now precedes it. --stale-fix Prune any reflog entries that point to "broken commits". A broken commit is a commit that is not reachable from any of the reference tips and that refers, directly or indirectly, to a missing commit, tree, or blob object. This computation involves traversing all the reachable objects, i.e. it has the same cost as `git prune`. 
It is primarily intended to fix corruption caused by garbage collecting using older versions of Git, which didn’t protect objects referred to by reflogs. -n --dry-run Do not actually prune any entries; just show what would have been pruned. --verbose Print extra information on screen. ### Options for `delete` `git reflog delete` accepts options `--updateref`, `--rewrite`, `-n`, `--dry-run`, and `--verbose`, with the same meanings as when they are used with `expire`. git git-verify-pack git-verify-pack =============== Name ---- git-verify-pack - Validate packed Git archive files Synopsis -------- ``` git verify-pack [-v | --verbose] [-s | --stat-only] [--] <pack>.idx…​ ``` Description ----------- Reads the given idx files for packed Git archives created with the `git pack-objects` command and verifies the idx files and the corresponding pack files. Options ------- <pack>.idx …​ The idx files to verify. -v --verbose After verifying the pack, show the list of objects contained in the pack and a histogram of delta chain length. -s --stat-only Do not verify the pack contents; only show the histogram of delta chain length. With `--verbose`, the list of objects is also shown. -- Do not interpret any more arguments as options. Output format ------------- When the -v option is specified, the format used is: ``` SHA-1 type size size-in-packfile offset-in-packfile ``` for objects that are not deltified in the pack, and ``` SHA-1 type size size-in-packfile offset-in-packfile depth base-SHA-1 ``` for objects that are deltified. 
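A quick way to try this on a scratch repository; `git repack -ad` is run first so that a pack file is guaranteed to exist (names and identity below are arbitrary):

```shell
# Scratch repository; repack first so that a pack file exists
# (assumes git is installed; names are arbitrary).
git init -q packdemo && cd packdemo
git config user.name demo && git config user.email demo@example.com
echo data > f && git add f && git commit -qm init
git repack -adq      # move all loose objects into a single pack
git verify-pack -v .git/objects/pack/*.idx
```

With `-v`, the listing contains one line per packed object (the commit, tree, and blob from the single commit above), followed by the delta chain histogram.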
git git-cat-file git-cat-file ============ Name ---- git-cat-file - Provide content or type and size information for repository objects Synopsis -------- ``` git cat-file <type> <object> git cat-file (-e | -p) <object> git cat-file (-t | -s) [--allow-unknown-type] <object> git cat-file (--batch | --batch-check | --batch-command) [--batch-all-objects] [--buffer] [--follow-symlinks] [--unordered] [--textconv | --filters] [-z] git cat-file (--textconv | --filters) [<rev>:<path|tree-ish> | --path=<path|tree-ish> <rev>] ``` Description ----------- In its first form, the command provides the content or the type of an object in the repository. The type is required unless `-t` or `-p` is used to find the object type, or `-s` is used to find the object size, or `--textconv` or `--filters` is used (which imply type "blob"). In the second form, a list of objects (separated by linefeeds) is provided on stdin, and the SHA-1, type, and size of each object is printed on stdout. The output format can be overridden using the optional `<format>` argument. If either `--textconv` or `--filters` was specified, the input is expected to list the object names followed by the path name, separated by a single whitespace, so that the appropriate drivers can be determined. Options ------- <object> The name of the object to show. For a more complete list of ways to spell object names, see the "SPECIFYING REVISIONS" section in [gitrevisions[7]](gitrevisions). -t Instead of the content, show the object type identified by `<object>`. -s Instead of the content, show the object size identified by `<object>`. -e Exit with zero status if `<object>` exists and is a valid object. If `<object>` is of an invalid format exit with non-zero and emits an error on stderr. -p Pretty-print the contents of `<object>` based on its type. <type> Typically this matches the real type of `<object>` but asking for a type that can trivially be dereferenced from the given `<object>` is also permitted. 
An example is to ask for a "tree" with `<object>` being a commit object that contains it, or to ask for a "blob" with `<object>` being a tag object that points at it. --[no-]mailmap --[no-]use-mailmap Use mailmap file to map author, committer and tagger names and email addresses to canonical real names and email addresses. See [git-shortlog[1]](git-shortlog). --textconv Show the content as transformed by a textconv filter. In this case, `<object>` has to be of the form `<tree-ish>:<path>`, or `:<path>` in order to apply the filter to the content recorded in the index at `<path>`. --filters Show the content as converted by the filters configured in the current working tree for the given `<path>` (i.e. smudge filters, end-of-line conversion, etc). In this case, `<object>` has to be of the form `<tree-ish>:<path>`, or `:<path>`. --path=<path> For use with `--textconv` or `--filters`, to allow specifying an object name and a path separately, e.g. when it is difficult to figure out the revision from which the blob came. --batch --batch=<format> Print object information and contents for each object provided on stdin. May not be combined with any other options or arguments except `--textconv` or `--filters`, in which case the input lines also need to specify the path, separated by whitespace. See the section `BATCH OUTPUT` below for details. --batch-check --batch-check=<format> Print object information for each object provided on stdin. May not be combined with any other options or arguments except `--textconv` or `--filters`, in which case the input lines also need to specify the path, separated by whitespace. See the section `BATCH OUTPUT` below for details. --batch-command --batch-command=<format> Enter a command mode that reads commands and arguments from stdin. May only be combined with `--buffer`, `--textconv` or `--filters`. In the case of `--textconv` or `--filters`, the input lines also need to specify the path, separated by whitespace. 
See the section `BATCH OUTPUT` below for details. `--batch-command` recognizes the following commands: contents <object> Print object contents for object reference `<object>`. This corresponds to the output of `--batch`. info <object> Print object info for object reference `<object>`. This corresponds to the output of `--batch-check`. flush Used with `--buffer` to execute all preceding commands that were issued since the beginning or since the last flush was issued. When `--buffer` is used, no output will come until a `flush` is issued. When `--buffer` is not used, commands are flushed each time without issuing `flush`. --batch-all-objects Instead of reading a list of objects on stdin, perform the requested batch operation on all objects in the repository and any alternate object stores (not just reachable objects). Requires `--batch` or `--batch-check` be specified. By default, the objects are visited in order sorted by their hashes; see also `--unordered` below. Objects are presented as-is, without respecting the "replace" mechanism of [git-replace[1]](git-replace). --buffer Normally batch output is flushed after each object is output, so that a process can interactively read and write from `cat-file`. With this option, the output uses normal stdio buffering; this is much more efficient when invoking `--batch-check` or `--batch-command` on a large number of objects. --unordered When `--batch-all-objects` is in use, visit objects in an order which may be more efficient for accessing the object contents than hash order. The exact details of the order are unspecified, but if you do not require a specific order, this should generally result in faster output, especially with `--batch`. Note that `cat-file` will still show each object only once, even if it is stored multiple times in the repository. --allow-unknown-type Allow `-s` or `-t` to query broken/corrupt objects of unknown type. 
--follow-symlinks With `--batch` or `--batch-check`, follow symlinks inside the repository when requesting objects with extended SHA-1 expressions of the form tree-ish:path-in-tree. Instead of providing output about the link itself, provide output about the linked-to object. If a symlink points outside the tree-ish (e.g. a link to `/foo` or a root-level link to `../foo`), the portion of the link which is outside the tree will be printed. This option does not (currently) work correctly when an object in the index is specified (e.g. `:link` instead of `HEAD:link`) rather than one in the tree. This option cannot (currently) be used unless `--batch` or `--batch-check` is used. For example, consider a git repository containing: ``` f: a file containing "hello\n" link: a symlink to f dir/link: a symlink to ../f plink: a symlink to ../f alink: a symlink to /etc/passwd ``` For a regular file `f`, `echo HEAD:f | git cat-file --batch` would print ``` ce013625030ba8dba906f756967f9e9ca394464a blob 6 ``` And `echo HEAD:link | git cat-file --batch --follow-symlinks` would print the same thing, as would `HEAD:dir/link`, as they both point at `HEAD:f`. Without `--follow-symlinks`, these would print data about the symlink itself. In the case of `HEAD:link`, you would see ``` 4d1ae35ba2c8ec712fa2a379db44ad639ca277bd blob 1 ``` Both `plink` and `alink` point outside the tree, so they would respectively print: ``` symlink 4 ../f ``` ``` symlink 11 /etc/passwd ``` -z Only meaningful with `--batch`, `--batch-check`, or `--batch-command`; input is NUL-delimited instead of newline-delimited. Output ------ If `-t` is specified, one of the `<type>`. If `-s` is specified, the size of the `<object>` in bytes. If `-e` is specified, no output, unless the `<object>` is malformed. If `-p` is specified, the contents of `<object>` are pretty-printed. If `<type>` is specified, the raw (though uncompressed) contents of the `<object>` will be returned. 
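The one-shot forms above can be tried in a scratch repository (a sketch assuming `git` is installed; names and identity are arbitrary):

```shell
# Scratch repository (assumes git is installed; names are arbitrary).
git init -q catdemo && cd catdemo
git config user.name demo && git config user.email demo@example.com
echo hello > f && git add f && git commit -qm init
git cat-file -t HEAD      # commit
git cat-file -t HEAD:f    # blob
git cat-file -s HEAD:f    # 6 ("hello" plus the trailing newline)
git cat-file -p HEAD:f    # hello
```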
Batch output ------------ If `--batch` or `--batch-check` is given, `cat-file` will read objects from stdin, one per line, and print information about them. By default, the whole line is considered as an object, as if it were fed to [git-rev-parse[1]](git-rev-parse). When `--batch-command` is given, `cat-file` will read commands from stdin, one per line, and print information based on the command given. With `--batch-command`, the `info` command followed by an object will print information about the object the same way `--batch-check` would, and the `contents` command followed by an object prints contents in the same way `--batch` would. You can specify the information shown for each object by using a custom `<format>`. The `<format>` is copied literally to stdout for each object, with placeholders of the form `%(atom)` expanded, followed by a newline. The available atoms are: `objectname` The full hex representation of the object name. `objecttype` The type of the object (the same as `cat-file -t` reports). `objectsize` The size, in bytes, of the object (the same as `cat-file -s` reports). `objectsize:disk` The size, in bytes, that the object takes up on disk. See the note about on-disk sizes in the `CAVEATS` section below. `deltabase` If the object is stored as a delta on-disk, this expands to the full hex representation of the delta base object name. Otherwise, expands to the null OID (all zeroes). See `CAVEATS` below. `rest` If this atom is used in the output string, input lines are split at the first whitespace boundary. All characters before that whitespace are considered to be the object name; characters after that first run of whitespace (i.e., the "rest" of the line) are output in place of the `%(rest)` atom. If no format is specified, the default format is `%(objectname) %(objecttype) %(objectsize)`. 
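For example, custom formats and the `%(rest)` atom can be checked in a scratch repository (names and identity below are arbitrary):

```shell
# Scratch repository (assumes git is installed; names are arbitrary).
git init -q batchdemo && cd batchdemo
git config user.name demo && git config user.email demo@example.com
echo hi > f && git add f && git commit -qm init
# Custom format: only the atoms asked for are printed.
echo HEAD:f | git cat-file --batch-check='%(objecttype) %(objectsize)'   # blob 3
# %(rest): everything after the first run of whitespace is echoed back.
printf 'HEAD:f trailing words\n' | git cat-file --batch-check='%(objectname) %(rest)'
```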
If `--batch` is specified, or if `--batch-command` is used with the `contents` command, the object information is followed by the object contents (consisting of `%(objectsize)` bytes), followed by a newline. For example, `--batch` without a custom format would produce: ``` <oid> SP <type> SP <size> LF <contents> LF ``` Whereas `--batch-check='%(objectname) %(objecttype)'` would produce: ``` <oid> SP <type> LF ``` If a name is specified on stdin that cannot be resolved to an object in the repository, then `cat-file` will ignore any custom format and print: ``` <object> SP missing LF ``` If a name is specified that might refer to more than one object (an ambiguous short sha), then `cat-file` will ignore any custom format and print: ``` <object> SP ambiguous LF ``` If `--follow-symlinks` is used, and a symlink in the repository points outside the repository, then `cat-file` will ignore any custom format and print: ``` symlink SP <size> LF <symlink> LF ``` The symlink will either be absolute (beginning with a `/`), or relative to the tree root. For instance, if dir/link points to `../../foo`, then `<symlink>` will be `../foo`. `<size>` is the size of the symlink in bytes. If `--follow-symlinks` is used, the following error messages will be displayed: ``` <object> SP missing LF ``` is printed when the initial symlink requested does not exist. ``` dangling SP <size> LF <object> LF ``` is printed when the initial symlink exists, but something that it (transitive-of) points to does not. ``` loop SP <size> LF <object> LF ``` is printed for symlink loops (or any symlinks that require more than 40 link resolutions to resolve). ``` notdir SP <size> LF <object> LF ``` is printed when, during symlink resolution, a file is used as a directory name. Caveats ------- Note that the sizes of objects on disk are reported accurately, but care should be taken in drawing conclusions about which refs or objects are responsible for disk usage. 
The size of a packed non-delta object may be much larger than the size of objects which delta against it, but the choice of which object is the base and which is the delta is arbitrary and is subject to change during a repack. Note also that multiple copies of an object may be present in the object database; in this case, it is undefined which copy’s size or delta base will be reported. git git-show git-show ======== Name ---- git-show - Show various types of objects Synopsis -------- ``` git show [<options>] [<object>…​] ``` Description ----------- Shows one or more objects (blobs, trees, tags and commits). For commits it shows the log message and textual diff. It also presents the merge commit in a special format as produced by `git diff-tree --cc`. For tags, it shows the tag message and the referenced objects. For trees, it shows the names (equivalent to `git ls-tree` with --name-only). For plain blobs, it shows the plain contents. The command takes options applicable to the `git diff-tree` command to control how the changes the commit introduces are shown. This manual page describes only the most frequently used options. Options ------- <object>…​ The names of objects to show (defaults to `HEAD`). For a more complete list of ways to spell object names, see "SPECIFYING REVISIONS" section in [gitrevisions[7]](gitrevisions). --pretty[=<format>] --format=<format> Pretty-print the contents of the commit logs in a given format, where `<format>` can be one of `oneline`, `short`, `medium`, `full`, `fuller`, `reference`, `email`, `raw`, `format:<string>` and `tformat:<string>`. When `<format>` is none of the above, and has `%placeholder` in it, it acts as if `--pretty=tformat:<format>` were given. See the "PRETTY FORMATS" section for some additional details for each format. When `=<format>` part is omitted, it defaults to `medium`. Note: you can specify the default pretty format in the repository configuration (see [git-config[1]](git-config)). 
--abbrev-commit Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. "--abbrev=<n>" (which also modifies diff output, if it is displayed) option can be used to specify the minimum length of the prefix. This should make "--pretty=oneline" a whole lot more readable for people using 80-column terminals. --no-abbrev-commit Show the full 40-byte hexadecimal commit object name. This negates `--abbrev-commit`, either explicit or implied by other options such as "--oneline". It also overrides the `log.abbrevCommit` variable. --oneline This is a shorthand for "--pretty=oneline --abbrev-commit" used together. --encoding=<encoding> Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in `X` and we are outputting in `X`, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim. --expand-tabs=<n> --expand-tabs --no-expand-tabs Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is multiple of `<n>`) in the log message before showing it in the output. `--expand-tabs` is a short-hand for `--expand-tabs=8`, and `--no-expand-tabs` is a short-hand for `--expand-tabs=0`, which disables tab expansion. By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. `medium`, which is the default, `full`, and `fuller`). --notes[=<ref>] Show the notes (see [git-notes[1]](git-notes)) that annotate the commit, when showing the commit log message. 
This is the default for `git log`, `git show` and `git whatchanged` commands when there is no `--pretty`, `--format`, or `--oneline` option given on the command line. By default, the notes shown are from the notes refs listed in the `core.notesRef` and `notes.displayRef` variables (or corresponding environment overrides). See [git-config[1]](git-config) for more details. With an optional `<ref>` argument, use the ref to find the notes to display. The ref can specify the full refname when it begins with `refs/notes/`; when it begins with `notes/`, `refs/` and otherwise `refs/notes/` is prefixed to form a full name of the ref. Multiple --notes options can be combined to control which notes are being displayed. Examples: "--notes=foo" will show only notes from "refs/notes/foo"; "--notes=foo --notes" will show both notes from "refs/notes/foo" and from the default notes ref(s). --no-notes Do not show notes. This negates the above `--notes` option, by resetting the list of notes refs from which notes are shown. Options are parsed in the order given on the command line, so e.g. "--notes --notes=foo --no-notes --notes=bar" will only show notes from "refs/notes/bar". --show-notes[=<ref>] --[no-]standard-notes These options are deprecated. Use the above --notes/--no-notes options instead. --show-signature Check the validity of a signed commit object by passing the signature to `gpg --verify` and show the output. Pretty formats -------------- If the commit is a merge, and if the pretty-format is not `oneline`, `email` or `raw`, an additional line is inserted before the `Author:` line. This line begins with "Merge: " and the hashes of ancestral commits are printed, separated by spaces. Note that the listed commits may not necessarily be the list of the **direct** parent commits if you have limited your view of history: for example, if you are only interested in changes related to a certain directory or file. 
There are several built-in formats, and you can define additional formats by setting a pretty.<name> config option to either another format name, or a `format:` string, as described below (see [git-config[1]](git-config)). Here are the details of the built-in formats: * `oneline` ``` <hash> <title-line> ``` This is designed to be as compact as possible. * `short` ``` commit <hash> Author: <author> ``` ``` <title-line> ``` * `medium` ``` commit <hash> Author: <author> Date: <author-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `full` ``` commit <hash> Author: <author> Commit: <committer> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `fuller` ``` commit <hash> Author: <author> AuthorDate: <author-date> Commit: <committer> CommitDate: <committer-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `reference` ``` <abbrev-hash> (<title-line>, <short-author-date>) ``` This format is used to refer to another commit in a commit message and is the same as `--pretty='format:%C(auto)%h (%s, %ad)'`. By default, the date is formatted with `--date=short` unless another `--date` option is explicitly specified. As with any `format:` with format placeholders, its output is not affected by other options like `--decorate` and `--walk-reflogs`. * `email` ``` From <hash> <date> From: <author> Date: <author-date> Subject: [PATCH] <title-line> ``` ``` <full-commit-message> ``` * `mboxrd` Like `email`, but lines in the commit message starting with "From " (preceded by zero or more ">") are quoted with ">" so they aren’t confused as starting a new commit. * `raw` The `raw` format shows the entire commit exactly as stored in the commit object. Notably, the hashes are displayed in full, regardless of whether --abbrev or --no-abbrev are used, and `parents` information show the true parent commits, without taking grafts or history simplification into account. 
Note that this format affects the way commits are displayed, but not the way the diff is shown, e.g. with `git log --raw`. To get full object names in a raw diff format, use `--no-abbrev`.

* `format:<format-string>`

The `format:<format-string>` format allows you to specify which information you want to show. It works a little bit like printf format, with the notable exception that you get a newline with `%n` instead of `\n`. E.g., `format:"The author of %h was %an, %ar%nThe title was >>%s<<%n"` would show something like this:

```
The author of fe6e0ee was Junio C Hamano, 23 hours ago
The title was >>t4119: test autocomputing -p<n> for traditional diff input.<<
```

The placeholders are:

+ Placeholders that expand to a single literal character:

*%n* newline
*%%* a raw `%`
*%x00* print a byte from a hex code

+ Placeholders that affect formatting of later placeholders:

*%Cred* switch color to red
*%Cgreen* switch color to green
*%Cblue* switch color to blue
*%Creset* reset color
*%C(…)* color specification, as described under Values in the "CONFIGURATION FILE" section of [git-config[1]](git-config). By default, colors are shown only when enabled for log output (by `color.diff`, `color.ui`, or `--color`, and respecting the `auto` settings of the former if we are going to a terminal). `%C(auto,...)` is accepted as a historical synonym for the default (e.g., `%C(auto,red)`). Specifying `%C(always,...)` will show the colors even when color is not otherwise enabled (though consider just using `--color=always` to enable color for the whole output, including this format and anything else git might color). `auto` alone (i.e. `%C(auto)`) will turn on auto coloring on the next placeholders until the color is switched again.
*%m* left (`<`), right (`>`) or boundary (`-`) mark
*%w([<w>[,<i1>[,<i2>]]])* switch line wrapping, like the -w option of [git-shortlog[1]](git-shortlog).
*%<(<N>[,trunc|ltrunc|mtrunc])* make the next placeholder take at least N columns, padding spaces on the right if necessary. Optionally truncate at the beginning (ltrunc), the middle (mtrunc) or the end (trunc) if the output is longer than N columns. Note that truncating only works correctly with N >= 2.
*%<|(<N>)* make the next placeholder take at least until Nth columns, padding spaces on the right if necessary
*%>(<N>)*, *%>|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding spaces on the left
*%>>(<N>)*, *%>>|(<N>)* similar to `%>(<N>)`, `%>|(<N>)` respectively, except that if the next placeholder takes more spaces than given and there are spaces on its left, use those spaces
*%><(<N>)*, *%><|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding both sides (i.e. the text is centered)

+ Placeholders that expand to information extracted from the commit:

*%H* commit hash
*%h* abbreviated commit hash
*%T* tree hash
*%t* abbreviated tree hash
*%P* parent hashes
*%p* abbreviated parent hashes
*%an* author name
*%aN* author name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%ae* author email
*%aE* author email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%al* author email local-part (the part before the `@` sign)
*%aL* author local-part (see `%al`) respecting .mailmap (see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%ad* author date (format respects --date= option)
*%aD* author date, RFC2822 style
*%ar* author date, relative
*%at* author date, UNIX timestamp
*%ai* author date, ISO 8601-like format
*%aI* author date, strict ISO 8601 format
*%as* author date, short format (`YYYY-MM-DD`)
*%ah* author date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list))
*%cn* committer name
*%cN* committer name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%ce* committer email
*%cE*
committer email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%cl* committer email local-part (the part before the `@` sign)
*%cL* committer local-part (see `%cl`) respecting .mailmap (see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame))
*%cd* committer date (format respects --date= option)
*%cD* committer date, RFC2822 style
*%cr* committer date, relative
*%ct* committer date, UNIX timestamp
*%ci* committer date, ISO 8601-like format
*%cI* committer date, strict ISO 8601 format
*%cs* committer date, short format (`YYYY-MM-DD`)
*%ch* committer date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list))
*%d* ref names, like the --decorate option of [git-log[1]](git-log)
*%D* ref names without the " (", ")" wrapping.
*%(describe[:options])* human-readable name, like [git-describe[1]](git-describe); empty string for undescribable commits. The `describe` string may be followed by a colon and zero or more comma-separated options. Descriptions can be inconsistent when tags are added or removed at the same time.

- `tags[=<bool-value>]`: Instead of only considering annotated tags, consider lightweight tags as well.
- `abbrev=<number>`: Instead of using the default number of hexadecimal digits (which will vary according to the number of objects in the repository with a default of 7) of the abbreviated object name, use <number> digits, or as many digits as needed to form a unique object name.
- `match=<pattern>`: Only consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix.
- `exclude=<pattern>`: Do not consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix.
*%S* ref name given on the command line by which the commit was reached (like `git log --source`), only works with `git log` *%e* encoding *%s* subject *%f* sanitized subject line, suitable for a filename *%b* body *%B* raw body (unwrapped subject and body) *%N* commit notes *%GG* raw verification message from GPG for a signed commit *%G?* show "G" for a good (valid) signature, "B" for a bad signature, "U" for a good signature with unknown validity, "X" for a good signature that has expired, "Y" for a good signature made by an expired key, "R" for a good signature made by a revoked key, "E" if the signature cannot be checked (e.g. missing key) and "N" for no signature *%GS* show the name of the signer for a signed commit *%GK* show the key used to sign a signed commit *%GF* show the fingerprint of the key used to sign a signed commit *%GP* show the fingerprint of the primary key whose subkey was used to sign a signed commit *%GT* show the trust level for the key used to sign a signed commit *%gD* reflog selector, e.g., `refs/stash@{1}` or `refs/stash@{2 minutes ago}`; the format follows the rules described for the `-g` option. The portion before the `@` is the refname as given on the command line (so `git log -g refs/heads/master` would yield `refs/heads/master@{0}`). *%gd* shortened reflog selector; same as `%gD`, but the refname portion is shortened for human readability (so `refs/heads/master` becomes just `master`). *%gn* reflog identity name *%gN* reflog identity name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ge* reflog identity email *%gE* reflog identity email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%gs* reflog subject *%(trailers[:options])* display the trailers of the body as interpreted by [git-interpret-trailers[1]](git-interpret-trailers). The `trailers` string may be followed by a colon and zero or more comma-separated options. 
If any option is provided multiple times the last occurrence wins.

- `key=<key>`: only show trailers with specified <key>. Matching is done case-insensitively and trailing colon is optional. If option is given multiple times trailer lines matching any of the keys are shown. This option automatically enables the `only` option so that non-trailer lines in the trailer block are hidden. If that is not desired it can be disabled with `only=false`. E.g., `%(trailers:key=Reviewed-by)` shows trailer lines with key `Reviewed-by`.
- `only[=<bool>]`: select whether non-trailer lines from the trailer block should be included.
- `separator=<sep>`: specify a separator inserted between trailer lines. When this option is not given each trailer line is terminated with a line feed character. The string <sep> may contain the literal formatting codes described above. To use comma as separator one must use `%x2C` as it would otherwise be parsed as the next option. E.g., `%(trailers:key=Ticket,separator=%x2C )` shows all trailer lines whose key is "Ticket" separated by a comma and a space.
- `unfold[=<bool>]`: make it behave as if interpret-trailers' `--unfold` option was given. E.g., `%(trailers:only,unfold=true)` unfolds and shows all trailer lines.
- `keyonly[=<bool>]`: only show the key part of the trailer.
- `valueonly[=<bool>]`: only show the value part of the trailer.
- `key_value_separator=<sep>`: specify a separator inserted between the key part and the value part of trailer lines. When this option is not given each trailer key-value pair is separated by ": ". Otherwise it shares the same semantics as `separator=<sep>` above.

Note: Some placeholders may depend on other options given to the revision traversal engine. For example, the `%g*` reflog options will insert an empty string unless we are traversing reflog entries (e.g., by `git log -g`). The `%d` and `%D` placeholders will use the "short" decoration format if `--decorate` was not already provided on the command line.

The boolean options accept an optional value `[=<bool-value>]`. The values `true`, `false`, `on`, `off` etc. are all accepted. See the "boolean" sub-section in "EXAMPLES" in [git-config[1]](git-config). If a boolean option is given with no value, it’s enabled.

If you add a `+` (plus sign) after `%` of a placeholder, a line-feed is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string.

If you add a `-` (minus sign) after `%` of a placeholder, all consecutive line-feeds immediately preceding the expansion are deleted if and only if the placeholder expands to an empty string.

If you add a ` ` (space) after `%` of a placeholder, a space is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string.

* `tformat:`

The `tformat:` format works exactly like `format:`, except that it provides "terminator" semantics instead of "separator" semantics. In other words, each commit has the message terminator character (usually a newline) appended, rather than a separator placed between entries. This means that the final entry of a single-line format will be properly terminated with a new line, just as the "oneline" format does. For example:

```
$ git log -2 --pretty=format:%h 4da45bef \
  | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/'
4da45be
7134973 -- NO NEWLINE
$ git log -2 --pretty=tformat:%h 4da45bef \
  | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/'
4da45be
7134973
```

In addition, any unrecognized string that has a `%` in it is interpreted as if it has `tformat:` in front of it. For example, these two are equivalent:

```
$ git log -2 --pretty=tformat:%h 4da45bef
$ git log -2 --pretty=%h 4da45bef
```

Diff formatting
---------------

The options below can be used to change the way `git show` generates diff output.

-p
-u
--patch
Generate patch (see section on generating patches).

-s
--no-patch
Suppress diff output.
Useful for commands like `git show` that show the patch by default, or to cancel the effect of `--patch`.

--diff-merges=(off|none|on|first-parent|1|separate|m|combined|c|dense-combined|cc|remerge|r)
--no-diff-merges
Specify the diff format to be used for merge commits. The default is `dense-combined` unless `--first-parent` is in use, in which case `first-parent` is the default.

--diff-merges=(off|none)
--no-diff-merges
Disable output of diffs for merge commits. Useful to override the implied value.

--diff-merges=on
--diff-merges=m
-m
This option causes diff output for merge commits to be shown in the default format. `-m` will produce the output only if `-p` is given as well. The default format can be changed using the `log.diffMerges` configuration parameter, whose default value is `separate`.

--diff-merges=first-parent
--diff-merges=1
This option makes merge commits show the full diff with respect to the first parent only.

--diff-merges=separate
This makes merge commits show the full diff with respect to each of the parents. A separate log entry and diff is generated for each parent.

--diff-merges=remerge
--diff-merges=r
--remerge-diff
With this option, two-parent merge commits are remerged to create a temporary tree object, potentially containing files with conflict markers and such. A diff is then shown between that temporary tree and the actual merge commit.

The output emitted when this option is used is subject to change, and so is its interaction with other options (unless explicitly documented).

--diff-merges=combined
--diff-merges=c
-c
With this option, diff output for a merge commit shows the differences from each of the parents to the merge result simultaneously instead of showing the pairwise diff between a parent and the result one at a time. Furthermore, it lists only files which were modified from all parents. `-c` implies `-p`.
--diff-merges=dense-combined
--diff-merges=cc
--cc
With this option the output produced by `--diff-merges=combined` is further compressed by omitting uninteresting hunks whose contents in the parents have only two variants and the merge result picks one of them without modification. `--cc` implies `-p`.

--combined-all-paths
This flag causes combined diffs (used for merge commits) to list the name of the file from all parents. It thus only has effect when `--diff-merges=[dense-]combined` is in use, and is likely only useful if filename changes are detected (i.e. when either rename or copy detection has been requested).

-U<n>
--unified=<n>
Generate diffs with <n> lines of context instead of the usual three. Implies `--patch`.

--output=<file>
Output to a specific file instead of stdout.

--output-indicator-new=<char>
--output-indicator-old=<char>
--output-indicator-context=<char>
Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and a space, respectively.

--raw
For each commit, show a summary of changes using the raw diff format. See the "RAW OUTPUT FORMAT" section of [git-diff[1]](git-diff). This is different from showing the log itself in raw format, which you can achieve with `--format=raw`.

--patch-with-raw
Synonym for `-p --raw`.

-t
Show the tree objects in the diff output.

--indent-heuristic
Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default.

--no-indent-heuristic
Disable the indent heuristic.

--minimal
Spend extra time to make sure the smallest possible diff is produced.

--patience
Generate a diff using the "patience diff" algorithm.

--histogram
Generate a diff using the "histogram diff" algorithm.

--anchored=<text>
Generate a diff using the "anchored diff" algorithm. This option may be specified more than once.
If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally.

--diff-algorithm={patience|minimal|histogram|myers}
Choose a diff algorithm. The variants are as follows:

`default`, `myers`
The basic greedy diff algorithm. Currently, this is the default.

`minimal`
Spend extra time to make sure the smallest possible diff is produced.

`patience`
Use "patience diff" algorithm when generating patches.

`histogram`
This algorithm extends the patience algorithm to "support low-occurrence common elements".

For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use the `--diff-algorithm=default` option.

--stat[=<width>[,<name-width>[,<count>]]]
Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`.

--compact-summary
Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat.
The information is put between the filename part and the graph part. Implies `--stat`. --numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. 
Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter.

<limit>
An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output.

Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`.

--cumulative
Synonym for --dirstat=cumulative

--dirstat-by-file[=<param1,param2>…]
Synonym for --dirstat=files,param1,param2…

--summary
Output a condensed summary of extended header information such as creations, renames and mode changes.

--patch-with-stat
Synonym for `-p --stat`.

-z
Separate the commits with NULs instead of with newlines. Also, when `--raw` or `--numstat` has been given, do not munge pathnames and use NULs as output field terminators.

Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)).

--name-only
Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page.

--name-status
Show only names and status of changed files. See the description of the `--diff-filter` option on what the status letters mean. Just like `--name-only` the file names are often encoded in UTF-8.

--submodule[=<format>]
Specify how differences in submodules are shown. When specifying `--submodule=short` the `short` format is used. This format just shows the names of the commits at the beginning and end of the range. When `--submodule` or `--submodule=log` is specified, the `log` format is used.
This format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. When `--submodule=diff` is specified, the `diff` format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to `diff.submodule` or the `short` format if the config option is unset. --color[=<when>] Show colored diff. `--color` (i.e. without `=<when>`) is the same as `--color=always`. `<when>` can be one of `always`, `never`, or `auto`. --no-color Turn off colored diff. It is the same as `--color=never`. --color-moved[=<mode>] Moved lines of code are colored differently. The <mode> defaults to `no` if the option is not given and to `zebra` if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for `zebra`. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with `color.diff.newMoved`. Similarly `color.diff.oldMoved` will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the `color.diff.{old,new}Moved` color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in `blocks` mode. The blocks are painted using either the `color.diff.{old,new}Moved` color or `color.diff.{old,new}MovedAlternative`. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to `zebra`, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. `dimmed_zebra` is a deprecated synonym. 
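Move detection can be observed directly. The following sketch (arbitrary file names) moves one line between files and compares colored output with and without `--color-moved`; `--color=always` forces colors even when the output is captured, and the exact colors used depend on the `color.diff.{old,new}Moved` settings.

```shell
#!/bin/sh
set -e
export GIT_AUTHOR_NAME=Example GIT_AUTHOR_EMAIL=e@example.com
export GIT_COMMITTER_NAME=Example GIT_COMMITTER_EMAIL=e@example.com
dir=$(mktemp -d) && cd "$dir"
git init -q
printf 'alpha\nbravo\ncharlie\n' > a.txt
printf 'delta\n' > b.txt
git add . && git commit -qm init

# Move the "bravo" line from a.txt to b.txt in the worktree.
printf 'alpha\ncharlie\n' > a.txt
printf 'delta\nbravo\n' > b.txt

# In plain mode, a line removed in one place and added in another is
# painted with the *Moved colors instead of the normal old/new colors,
# so the two colored outputs differ.
plain=$(git diff --color=always --no-color-moved)
moved=$(git diff --color=always --color-moved=plain)
[ "$plain" != "$moved" ] && echo "moved line recolored"
```

`plain` mode suffices here because, unlike `blocks` and `zebra`, it has no minimum block size.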
--no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as `--color-moved=no`. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for `--color-moved`. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as `--color-moved-ws=no`. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see `--word-diff-regex` below. The <mode> defaults to `plain`, and must be one of: color Highlight changed words using only colors. Implies `--color`. plain Show words as `[-removed-]` and `{+added+}`. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a `+`/`-`/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde `~` on a line of its own. none Disable word diff again. 
Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies `--word-diff` unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append `|[^[:space:]]` to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, `--word-diff-regex=.` will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see [gitattributes[5]](gitattributes) or [git-config[1]](git-config). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to `--word-diff=color` plus (if a regex was specified) `--word-diff-regex=<regex>`. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. 
Multiple values are separated by comma, `none` resets previous values, `default` resets the list to `new` and `all` is a shorthand for `old,new,context`. When this option is not given, and the configuration variable `diff.wsErrorHighlight` is not set, only whitespace errors in `new` lines are highlighted. The whitespace errors are colored with `color.diff.whitespace`.

--full-index
Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output.

--binary
In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. Implies `--patch`.

--abbrev[=<n>]
Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long that uniquely refers to the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. A non-default number of digits can be specified with `--abbrev=<n>`.

-B[<n>][/<m>]
--break-rewrites[=[<n>][/<m>]]
Break complete rewrite changes into pairs of delete and create. This serves two purposes:

It affects the way a change that amounts to a total rewrite of a file appears: not as a series of deletions and insertions mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new. The number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines).
When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%). `-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size is eligible for being picked up as a possible source of a rename to another file.

-M[<n>]
--find-renames[=<n>]
If generating diffs, detect and report renames for each commit. For following files across renames while traversing history, see `--follow`. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%.

-C[<n>]
--find-copies[=<n>]
Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`.

--find-copies-harder
For performance reasons, by default, the `-C` option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect.

-D
--irreversible-delete
Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change.
In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair. -l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)…​[\*]] Select only files that are Added (`A`), Copied (`C`), Deleted (`D`), Modified (`M`), Renamed (`R`), have their type (i.e. regular file, symlink, submodule, …​) changed (`T`), are Unmerged (`U`), are Unknown (`X`), or have had their pairing Broken (`B`). Any combination of the filter characters (including none) can be used. When `*` (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. `--diff-filter=ad` excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. 
It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into `-S`, and keep going until you get the very first version of the block. Binary files are searched as well. -G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between `-S<regex> --pickaxe-regex` and `-G<regex>`, consider a commit with the following diff in the same file: ``` + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); ``` While `git log -G"frotz\(nitfol"` will show this commit, `git log -S"frotz\(nitfol" --pickaxe-regex` will not (because the number of occurrences of that string did not change). Unless `--text` is supplied patches of binary files without a textconv filter will be ignored. See the `pickaxe` entry in [gitdiffcore[7]](gitdiffcore) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to `-S`, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the `-t` option in `git-log` to also find trees. --pickaxe-all When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to `-S` as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. 
All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: * Blank lines are ignored, so they can be used as separators for readability. * Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash. * Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM\_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`". --skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. 
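The effect of `--relative` described above can be seen in a throwaway repository. The following is a minimal sketch (all paths, file names, and the identity settings are made up for illustration): run from a subdirectory, `--relative` both limits the diff to that directory and strips the leading path components.

```shell
# Sketch: --relative from a subdirectory (illustrative paths only).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir sub
echo one > sub/inner.txt
echo two > top.txt
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m init

echo change >> sub/inner.txt
echo change >> top.txt

cd sub
git diff --name-only              # repo-relative paths: sub/inner.txt, top.txt
git diff --name-only --relative   # only inner.txt, relative to sub/
```

Note that with `--relative` the change to `top.txt` is excluded entirely, not merely shown with a different path; the option both limits and rewrites the output.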
--ignore-cr-at-eol
Ignore carriage-return at the end of line when doing a comparison.

--ignore-space-at-eol
Ignore changes in whitespace at EOL.

-b
--ignore-space-change
Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent.

-w
--ignore-all-space
Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none.

--ignore-blank-lines
Ignore changes whose lines are all blank.

-I<regex>
--ignore-matching-lines=<regex>
Ignore changes whose lines all match <regex>. This option may be specified more than once.

--inter-hunk-context=<lines>
Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to `diff.interHunkContext` or 0 if the config option is unset.

-W
--function-context
Show the whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)).

--ext-diff
Allow an external diff helper to be executed. If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends.

--no-ext-diff
Disallow external diff drivers.

--textconv
--no-textconv
Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands.

--ignore-submodules[=<when>]
Ignore changes to submodules in the diff generation.
<when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also [gitdiffcore[7]](gitdiffcore). Generating patch text with -p ----------------------------- Running [git-diff[1]](git-diff), [git-log[1]](git-log), [git-show[1]](git-show), [git-diff-index[1]](git-diff-index), [git-diff-tree[1]](git-diff-tree), or [git-diff-files[1]](git-diff-files) with the `-p` option produces patch text. You can customize the creation of patch text via the `GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables (see [git[1]](git)), and the `diff` attribute (see [gitattributes[5]](gitattributes)). 
What the -p option produces is slightly different from the traditional diff format: 1. It is preceded with a "git diff" header that looks like this: ``` diff --git a/file1 b/file2 ``` The `a/` and `b/` filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, `/dev/null` is `not` used in place of the `a/` or `b/` filenames. When rename/copy is involved, `file1` and `file2` show the name of the source file of the rename/copy and the name of the file that rename/copy produces, respectively. 2. It is followed by one or more extended header lines: ``` old mode <mode> new mode <mode> deleted file mode <mode> new file mode <mode> copy from <path> copy to <path> rename from <path> rename to <path> similarity index <number> dissimilarity index <number> index <hash>..<hash> <mode> ``` File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the `a/` and `b/` prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode. 3. Pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). 4. All the `file1` files in the output refer to files before the commit, and all the `file2` files refer to files after the commit. It is incorrect to apply each change to each file sequentially. 
For example, this patch will swap a and b:

```
diff --git a/a b/b
rename from a
rename to b
diff --git a/b b/a
rename from b
rename to a
```

5. Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in [gitattributes[5]](gitattributes) for details of how to tailor this to specific languages.

Combined diff format
--------------------

Any diff-generating command can take the `-c` or `--cc` option to produce a `combined diff` when showing a merge. This is the default format when showing merges with [git-diff[1]](git-diff) or [git-show[1]](git-show). Note also that you can give a suitable `--diff-merges` option to any of these commands to force generation of diffs in a specific format.

A "combined diff" format looks like this:

```
diff --combined describe.c
index fabadb8,cc95eb0..4866510
--- a/describe.c
+++ b/describe.c
@@@ -98,20 -98,12 +98,20 @@@
	return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1;
  }

- static void describe(char *arg)
 -static void describe(struct commit *cmit, int last_one)
++static void describe(char *arg, int last_one)
  {
 +	unsigned char sha1[20];
 +	struct commit *cmit;
	struct commit_list *list;
	static int initialized = 0;
	struct commit_name *n;

 +	if (get_sha1(arg, sha1) < 0)
 +		usage(describe_usage);
 +	cmit = lookup_commit_reference(sha1);
 +	if (!cmit)
 +		usage(describe_usage);
 +
	if (!initialized) {
		initialized = 1;
		for_each_ref(get_name);
```

1. It is preceded with a "git diff" header that looks like this (when the `-c` option is used):

```
diff --combined file
```

or like this (when the `--cc` option is used):

```
diff --cc file
```

2. It is followed by one or more extended header lines (this example shows a merge with two parents):

```
index <hash>,<hash>..<hash>
mode <mode>,<mode>..<mode>
new file mode <mode>
deleted file mode <mode>,<mode>
```

The `mode <mode>,<mode>..<mode>` line appears only if at least one of the <mode> is different from the rest.
Extended headers with information about detected contents movement (renames and copying detection) are designed to work with a diff of two <tree-ish> and are not used by the combined diff format.

3. It is followed by a two-line from-file/to-file header

```
--- a/file
+++ b/file
```

Similar to the two-line header of the traditional `unified` diff format, `/dev/null` is used to signal created or deleted files. However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file header you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit:

```
--- a/file
--- a/file
--- a/file
+++ b/file
```

This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents.

4. The chunk header format is modified to prevent people from accidentally feeding it to `patch -p1`. The combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended `index` header:

```
@@@ <from-file-range> <from-file-range> <to-file-range> @@@
```

There are (number of parents + 1) `@` characters in the chunk header for combined diff format.

Unlike the traditional `unified` diff format, which shows two files A and B with a single column that has `-` (minus — appears in A but removed in B), `+` (plus — missing in A but added to B), or `" "` (space — unchanged) prefix, this format compares two or more files file1, file2,…​ with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X’s line is different from it.

A `-` character in column N means that the line appears in fileN but it does not appear in the result. A `+` character in column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent).
In the above example output, the function signature was changed from both files (hence two `-` removals from both file1 and file2, plus `++` to mean one line that was added does not appear in either file1 or file2). Also, eight other lines are the same from file1 but do not appear in file2 (hence prefixed with `+`).

When shown by `git diff-tree -c`, it compares the parents of a merge commit with the merge result (i.e. file1..fileN are the parents). When shown by `git diff-files -c`, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version").

Examples
--------

`git show v1.0.0`

Shows the tag `v1.0.0`, along with the object the tag points at.

`git show v1.0.0^{tree}`

Shows the tree pointed to by the tag `v1.0.0`.

`git show -s --format=%s v1.0.0^{commit}`

Shows the subject of the commit pointed to by the tag `v1.0.0`.

`git show next~10:Documentation/README`

Shows the contents of the file `Documentation/README` as they were current in the 10th last commit of the branch `next`.

`git show master:Makefile master:t/Makefile`

Concatenates the contents of said Makefiles in the head of the branch `master`.

Discussion
----------

Git is to some extent character encoding agnostic.

* The contents of the blob objects are uninterpreted sequences of bytes. There is no encoding translation at the core level.
* Path names are encoded in UTF-8 normalization form C. This applies to tree objects, the index file, ref names, as well as path names in command line arguments, environment variables and config files (`.git/config` (see [git-config[1]](git-config)), [gitignore[5]](gitignore), [gitattributes[5]](gitattributes) and [gitmodules[5]](gitmodules)).

Note that Git at the core level treats path names simply as sequences of non-NUL bytes; there are no path name encoding conversions (except on Mac and Windows).
Therefore, using non-ASCII path names will mostly work even on platforms and file systems that use legacy extended ASCII encodings. However, repositories created on such systems will not work properly on UTF-8-based systems (e.g. Linux, Mac, Windows) and vice versa. Additionally, many Git-based tools simply assume path names to be UTF-8 and will fail to display other encodings correctly.

* Commit log messages are typically encoded in UTF-8, but other extended ASCII encodings are also supported. This includes ISO-8859-x, CP125x and many others, but `not` UTF-16/32, EBCDIC and CJK multi-byte encodings (GBK, Shift-JIS, Big5, EUC-x, CP9xx etc.).

Although we encourage commit log messages to be encoded in UTF-8, both the core and Git Porcelain are designed not to force UTF-8 on projects. If all participants of a particular project find it more convenient to use legacy encodings, Git does not forbid it. However, there are a few things to keep in mind.

1. `git commit` and `git commit-tree` issue a warning if the commit log message given to them does not look like a valid UTF-8 string, unless you explicitly say your project uses a legacy encoding. The way to say this is to have `i18n.commitEncoding` in `.git/config` file, like this:

```
[i18n]
	commitEncoding = ISO-8859-1
```

Commit objects created with the above setting record the value of `i18n.commitEncoding` in their `encoding` header. This is to help other people who look at them later. Lack of this header implies that the commit log message is encoded in UTF-8.

2. `git log`, `git show`, `git blame` and friends look at the `encoding` header of a commit object, and try to re-code the log message into UTF-8 unless otherwise specified. You can specify the desired output encoding with `i18n.logOutputEncoding` in `.git/config` file, like this:

```
[i18n]
	logOutputEncoding = ISO-8859-1
```

If you do not have this configuration variable, the value of `i18n.commitEncoding` is used instead.
Note that we deliberately chose not to re-code the commit log message when a commit is made to force UTF-8 at the commit object level, because re-coding to UTF-8 is not necessarily a reversible operation.
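The mechanism above can be observed directly in a throwaway repository (a sketch; the repository path and identity settings are illustrative). Setting `i18n.commitEncoding` makes new commit objects carry the `encoding` header that readers such as `git log` later use for re-coding:

```shell
# Sketch: i18n.commitEncoding is recorded in the raw commit object.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config i18n.commitEncoding ISO-8859-1
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'message in a legacy encoding'

# Inspect the raw commit object; it carries the encoding header.
git cat-file commit HEAD | grep '^encoding'   # encoding ISO-8859-1
```

Removing the `i18n.commitEncoding` setting (or setting it to `utf-8`) makes subsequent commits omit the header, which, as described above, implies UTF-8.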
git git-archive git-archive =========== Name ---- git-archive - Create an archive of files from a named tree Synopsis -------- ``` git archive [--format=<fmt>] [--list] [--prefix=<prefix>/] [<extra>] [-o <file> | --output=<file>] [--worktree-attributes] [--remote=<repo> [--exec=<git-upload-archive>]] <tree-ish> [<path>…​] ``` Description ----------- Creates an archive of the specified format containing the tree structure for the named tree, and writes it out to the standard output. If <prefix> is specified it is prepended to the filenames in the archive. `git archive` behaves differently when given a tree ID versus when given a commit ID or tag ID. In the first case the current time is used as the modification time of each file in the archive. In the latter case the commit time as recorded in the referenced commit object is used instead. Additionally the commit ID is stored in a global extended pax header if the tar format is used; it can be extracted using `git get-tar-commit-id`. In ZIP files it is stored as a file comment. Options ------- --format=<fmt> Format of the resulting archive. Possible values are `tar`, `zip`, `tar.gz`, `tgz`, and any format defined using the configuration option `tar.<format>.command`. If `--format` is not given, and the output file is specified, the format is inferred from the filename if possible (e.g. writing to `foo.zip` makes the output to be in the `zip` format). Otherwise the output format is `tar`. -l --list Show all available formats. -v --verbose Report progress to stderr. --prefix=<prefix>/ Prepend <prefix>/ to paths in the archive. Can be repeated; its rightmost value is used for all tracked files. See below which value gets used by `--add-file` and `--add-virtual-file`. -o <file> --output=<file> Write the archive to <file> instead of stdout. --add-file=<file> Add a non-tracked file to the archive. Can be repeated to add multiple files. 
The path of the file in the archive is built by concatenating the value of the last `--prefix` option (if any) before this `--add-file` and the basename of <file>.

--add-virtual-file=<path>:<content>
Add the specified contents to the archive. Can be repeated to add multiple files.

The path of the file in the archive is built by concatenating the value of the last `--prefix` option (if any) before this `--add-virtual-file` and `<path>`.

The `<path>` argument can start and end with a literal double-quote character; the contained file name is interpreted as a C-style string, i.e. the backslash is interpreted as escape character. The path must be quoted if it contains a colon, to prevent the colon from being misinterpreted as the separator between the path and the contents, or if the path begins or ends with a double-quote character.

The file mode is limited to a regular file, and the option may be subject to platform-dependent command-line limits. For non-trivial cases, write an untracked file and use `--add-file` instead.

--worktree-attributes
Look for attributes in .gitattributes files in the working tree as well (see [ATTRIBUTES](#ATTRIBUTES)).

<extra>
This can be any options that the archiver backend understands. See next section.

--remote=<repo>
Instead of making a tar archive from the local repository, retrieve a tar archive from a remote repository. Note that the remote repository may place restrictions on which sha1 expressions may be allowed in `<tree-ish>`. See [git-upload-archive[1]](git-upload-archive) for details.

--exec=<git-upload-archive>
Used with --remote to specify the path to the `git-upload-archive` on the remote side.

<tree-ish>
The tree or commit to produce an archive for.

<path>
Without an optional path parameter, all files and subdirectories of the current working directory are included in the archive. If one or more paths are specified, only these are included.
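The interplay of `--prefix` and `--add-file` described above can be sketched in a throwaway repository (file names and identity settings are illustrative; `tar` is assumed to be available for listing the result):

```shell
# Sketch: --prefix applies to tracked files and to later --add-file entries.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo code > tracked.txt
git add tracked.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m init

echo notes > untracked-notes.txt   # deliberately not tracked by git

git archive --prefix=build/ --add-file=untracked-notes.txt -o out.tar HEAD
tar -tf out.tar   # both entries appear under the build/ prefix
```

Because `--add-file` uses the basename of its argument and the last `--prefix` given before it, the untracked file ends up as `build/untracked-notes.txt` alongside `build/tracked.txt`.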
Backend extra options --------------------- ### zip -<digit> Specify compression level. Larger values allow the command to spend more time to compress to smaller size. Supported values are from `-0` (store-only) to `-9` (best ratio). Default is `-6` if not given. ### tar -<number> Specify compression level. The value will be passed to the compression command configured in `tar.<format>.command`. See manual page of the configured command for the list of supported levels and the default level if this option isn’t specified. Configuration ------------- tar.umask This variable can be used to restrict the permission bits of tar archive entries. The default is 0002, which turns off the world write bit. The special value "user" indicates that the archiving user’s umask will be used instead. See umask(2) for details. If `--remote` is used then only the configuration of the remote repository takes effect. tar.<format>.command This variable specifies a shell command through which the tar output generated by `git archive` should be piped. The command is executed using the shell with the generated tar file on its standard input, and should produce the final output on its standard output. Any compression-level options will be passed to the command (e.g., `-9`). The `tar.gz` and `tgz` formats are defined automatically and use the magic command `git archive gzip` by default, which invokes an internal implementation of gzip. tar.<format>.remote If true, enable the format for use by remote clients via [git-upload-archive[1]](git-upload-archive). Defaults to false for user-defined formats, but true for the `tar.gz` and `tgz` formats. Attributes ---------- export-ignore Files and directories with the attribute export-ignore won’t be added to archive files. See [gitattributes[5]](gitattributes) for details. export-subst If the attribute export-subst is set for a file then Git will expand several placeholders when adding this file to an archive. 
See [gitattributes[5]](gitattributes) for details. Note that attributes are by default taken from the `.gitattributes` files in the tree that is being archived. If you want to tweak the way the output is generated after the fact (e.g. you committed without adding an appropriate export-ignore in its `.gitattributes`), adjust the checked out `.gitattributes` file as necessary and use `--worktree-attributes` option. Alternatively you can keep necessary attributes that should apply while archiving any tree in your `$GIT_DIR/info/attributes` file. Examples -------- `git archive --format=tar --prefix=junk/ HEAD | (cd /var/tmp/ && tar xf -)` Create a tar archive that contains the contents of the latest commit on the current branch, and extract it in the `/var/tmp/junk` directory. `git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip >git-1.4.0.tar.gz` Create a compressed tarball for v1.4.0 release. `git archive --format=tar.gz --prefix=git-1.4.0/ v1.4.0 >git-1.4.0.tar.gz` Same as above, but using the builtin tar.gz handling. `git archive --prefix=git-1.4.0/ -o git-1.4.0.tar.gz v1.4.0` Same as above, but the format is inferred from the output file. `git archive --format=tar --prefix=git-1.4.0/ v1.4.0^{tree} | gzip >git-1.4.0.tar.gz` Create a compressed tarball for v1.4.0 release, but without a global extended pax header. `git archive --format=zip --prefix=git-docs/ HEAD:Documentation/ > git-1.4.0-docs.zip` Put everything in the current head’s Documentation/ directory into `git-1.4.0-docs.zip`, with the prefix `git-docs/`. `git archive -o latest.zip HEAD` Create a Zip archive that contains the contents of the latest commit on the current branch. Note that the output format is inferred by the extension of the output file. `git archive -o latest.tar --prefix=build/ --add-file=configure --prefix= HEAD` Creates a tar archive that contains the contents of the latest commit on the current branch with no prefix and the untracked file `configure` with the prefix `build/`. 
`git config tar.tar.xz.command "xz -c"` Configure a "tar.xz" format for making LZMA-compressed tarfiles. You can use it specifying `--format=tar.xz`, or by creating an output file like `-o foo.tar.xz`. See also -------- [gitattributes[5]](gitattributes) git gitsubmodules gitsubmodules ============= Name ---- gitsubmodules - Mounting one repository inside another Synopsis -------- ``` .gitmodules, $GIT_DIR/config ``` ``` git submodule git <command> --recurse-submodules ``` Description ----------- A submodule is a repository embedded inside another repository. The submodule has its own history; the repository it is embedded in is called a superproject. On the filesystem, a submodule usually (but not always - see FORMS below) consists of (i) a Git directory located under the `$GIT_DIR/modules/` directory of its superproject, (ii) a working directory inside the superproject’s working directory, and a `.git` file at the root of the submodule’s working directory pointing to (i). Assuming the submodule has a Git directory at `$GIT_DIR/modules/foo/` and a working directory at `path/to/bar/`, the superproject tracks the submodule via a `gitlink` entry in the tree at `path/to/bar` and an entry in its `.gitmodules` file (see [gitmodules[5]](gitmodules)) of the form `submodule.foo.path = path/to/bar`. The `gitlink` entry contains the object name of the commit that the superproject expects the submodule’s working directory to be at. The section `submodule.foo.*` in the `.gitmodules` file gives additional hints to Git’s porcelain layer. For example, the `submodule.foo.url` setting specifies where to obtain the submodule. Submodules can be used for at least two different use cases: 1. Using another project while maintaining independent history. Submodules allow you to contain the working tree of another project within your own working tree while keeping the history of both projects separate. 
Also, since submodules are fixed to an arbitrary version, the other project can be independently developed without affecting the superproject, allowing the superproject to fix itself to new versions only when desired.

2. Splitting a (logically single) project into multiple repositories and tying them back together. This can be used to overcome current limitations of Git’s implementation to have finer grained access:

* Size of the Git repository: In its current form Git scales up poorly for large repositories containing content that is not compressed by delta computation between trees. For example, you can use submodules to hold large binary assets and these repositories can be shallowly cloned such that you do not have a large history locally.
* Transfer size: In its current form Git requires the whole working tree present. It does not allow partial trees to be transferred in fetch or clone. If the project you work on consists of multiple repositories tied together as submodules in a superproject, you can avoid fetching the working trees of the repositories you are not interested in.
* Access control: By restricting user access to submodules, this can be used to implement read/write policies for different users.

The configuration of submodules
-------------------------------

Submodule operations can be configured using the following mechanisms (from highest to lowest precedence):

* The command line for those commands that support taking submodules as part of their pathspecs. Most commands have a boolean flag `--recurse-submodules` which specifies whether to recurse into submodules. Examples are `grep` and `checkout`. Some commands take enums, such as `fetch` and `push`, where you can specify how submodules are affected.
* The configuration inside the submodule. This includes `$GIT_DIR/config` in the submodule, but also settings in the tree such as `.gitattributes` or `.gitignore` files that specify behavior of commands inside the submodule.
For example an effect from the submodule’s `.gitignore` file would be observed when you run `git status --ignore-submodules=none` in the superproject. This collects information from the submodule’s working directory by running `status` in the submodule while paying attention to the `.gitignore` file of the submodule. The submodule’s `$GIT_DIR/config` file would come into play when running `git push --recurse-submodules=check` in the superproject, as this would check if the submodule has any changes not published to any remote. The remotes are configured in the submodule as usual in the `$GIT_DIR/config` file. * The configuration file `$GIT_DIR/config` in the superproject. Git only recurses into active submodules (see "ACTIVE SUBMODULES" section below). If the submodule is not yet initialized, then the configuration inside the submodule does not exist yet, so where to obtain the submodule from is configured here for example. * The `.gitmodules` file inside the superproject. A project usually uses this file to suggest defaults for the upstream collection of repositories for the mapping that is required between a submodule’s name and its path. This file mainly serves as the mapping between the name and path of submodules in the superproject, such that the submodule’s Git directory can be located. If the submodule has never been initialized, this is the only place where submodule configuration is found. It serves as the last fallback to specify where to obtain the submodule from. Forms ----- Submodules can take the following forms: * The basic form described in DESCRIPTION with a Git directory, a working directory, a `gitlink`, and a `.gitmodules` entry. * "Old-form" submodule: A working directory with an embedded `.git` directory, and the tracking `gitlink` and `.gitmodules` entry in the superproject. This is typically found in repositories generated using older versions of Git. It is possible to construct these old form repositories manually. 
When deinitialized or deleted (see below), the submodule’s Git directory is automatically moved to `$GIT_DIR/modules/<name>/` of the superproject.
* Deinitialized submodule: A `gitlink`, and a `.gitmodules` entry, but no submodule working directory. The submodule’s Git directory may still be present, as the Git directory is kept around after deinitializing. The directory which is supposed to be the working directory is empty instead.

  A submodule can be deinitialized by running `git submodule deinit`. Besides emptying the working directory, this command only modifies the superproject’s `$GIT_DIR/config` file, so the superproject’s history is not affected. This can be undone using `git submodule init`.
* Deleted submodule: A submodule can be deleted by running `git rm <submodule path> && git commit`. This can be undone using `git revert`.

  The deletion removes the superproject’s tracking data, which are both the `gitlink` entry and the section in the `.gitmodules` file. The submodule’s working directory is removed from the file system, but the Git directory is kept around to make it possible to check out past commits without requiring fetching from another repository.

  To completely remove a submodule, manually delete `$GIT_DIR/modules/<name>/`.

Active submodules
-----------------

A submodule is considered active,

1. if `submodule.<name>.active` is set to `true` or
2. if the submodule’s path matches the pathspec in `submodule.active` or
3. if `submodule.<name>.url` is set.

and these are evaluated in this order. For example:

```
[submodule "foo"]
	active = false
	url = https://example.org/foo
[submodule "bar"]
	active = true
	url = https://example.org/bar
[submodule "baz"]
	url = https://example.org/baz
```

In the above config only the submodules `bar` and `baz` are active, `bar` due to (1) and `baz` due to (3).
`foo` is inactive because (1) takes precedence over (3). Note that (3) is a historical artefact and will be ignored if (1) and (2) specify that the submodule is not active. In other words, if we have `submodule.<name>.active` set to `false` or if the submodule’s path is excluded in the pathspec in `submodule.active`, then it does not matter whether the url is present or not. This is illustrated in the example that follows.

```
[submodule "foo"]
	active = true
	url = https://example.org/foo
[submodule "bar"]
	url = https://example.org/bar
[submodule "baz"]
	url = https://example.org/baz
[submodule "bob"]
	ignore = true
[submodule]
	active = b*
	active = :(exclude) baz
```

Here all submodules except `baz` (foo, bar, bob) are active: `foo` due to its own active flag, and all the others due to the submodule active pathspec, which specifies that any submodule starting with `b` except `baz` is also active, regardless of the presence of the .url field.

Workflow for a third party library
----------------------------------

```
# Add a submodule
git submodule add <URL> <path>
```

```
# Occasionally update the submodule to a new version:
git -C <path> checkout <new version>
git add <path>
git commit -m "update submodule to new version"
```

```
# See the list of submodules in a superproject
git submodule status
```

```
# See FORMS on removing submodules
```

Workflow for an artificially split repo
---------------------------------------

```
# Enable recursion for relevant commands, such that
# regular commands recurse into submodules by default
git config --global submodule.recurse true
```

```
# Unlike most other commands below, clone still needs
# its own recurse flag:
git clone --recurse <URL> <directory>
cd <directory>
```

```
# Get to know the code:
git grep foo
git ls-files --recurse-submodules
```

| | |
| --- | --- |
| Note | `git ls-files` also requires its own `--recurse-submodules` flag. |

```
# Get new code
git fetch
git pull --rebase
```

```
# Change worktree
git checkout
git reset
```

Implementation details
----------------------

When cloning or pulling a repository containing submodules, the submodules will not be checked out by default; you can instruct `clone` to recurse into submodules. The `init` and `update` subcommands of `git submodule` will maintain submodules checked out and at an appropriate revision in your working tree. Alternatively you can set `submodule.recurse` to have `checkout` recurse into submodules (note that `submodule.recurse` also affects other Git commands, see [git-config[1]](git-config) for a complete list).

See also
--------

[git-submodule[1]](git-submodule), [gitmodules[5]](gitmodules).

git git-symbolic-ref

git-symbolic-ref
================

Name
----

git-symbolic-ref - Read, modify and delete symbolic refs

Synopsis
--------

```
git symbolic-ref [-m <reason>] <name> <ref>
git symbolic-ref [-q] [--short] [--no-recurse] <name>
git symbolic-ref --delete [-q] <name>
```

Description
-----------

Given one argument, reads which branch head the given symbolic ref refers to and outputs its path, relative to the `.git/` directory. Typically you would give `HEAD` as the <name> argument to see which branch your working tree is on.

Given two arguments, creates or updates a symbolic ref <name> to point at the given branch <ref>.

Given `--delete` and an additional argument, deletes the given symbolic ref.

A symbolic ref is a regular file that stores a string that begins with `ref: refs/`. For example, your `.git/HEAD` is a regular file whose content is `ref: refs/heads/master`.

Options
-------

-d
--delete
Delete the symbolic ref <name>.

-q
--quiet
Do not issue an error message if the <name> is not a symbolic ref but a detached HEAD; instead exit with non-zero status silently.

--short
When showing the value of <name> as a symbolic ref, try to shorten the value, e.g. from `refs/heads/master` to `master`.
--recurse
--no-recurse
When showing the value of <name> as a symbolic ref, if <name> refers to another symbolic ref, follow such a chain of symbolic refs until the result no longer points at a symbolic ref (`--recurse`, which is the default). `--no-recurse` stops after dereferencing only a single level of symbolic ref.

-m
Update the reflog for <name> with <reason>. This is valid only when creating or updating a symbolic ref.

Notes
-----

In the past, `.git/HEAD` was a symbolic link pointing at `refs/heads/master`. When we wanted to switch to another branch, we did `ln -sf refs/heads/newbranch .git/HEAD`, and when we wanted to find out which branch we are on, we did `readlink .git/HEAD`. But symbolic links are not entirely portable, so they are now deprecated and symbolic refs (as described above) are used by default.

`git symbolic-ref` will exit with status 0 if the contents of the symbolic ref were printed correctly, with status 1 if the requested name is not a symbolic ref, or 128 if another error occurs.
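The read and update forms described above can be exercised in a throwaway repository. The following is a minimal sketch; the temporary directory and branch names are illustrative, not part of the command:

```shell
# Minimal sketch of reading and updating a symbolic ref.
# The temporary repo and branch names here are illustrative.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git symbolic-ref HEAD refs/heads/main   # make the initial branch name deterministic

git symbolic-ref HEAD                   # prints refs/heads/main
git symbolic-ref --short HEAD           # prints main

# Point HEAD at another (possibly unborn) branch, recording a reason:
git symbolic-ref -m "start topic work" HEAD refs/heads/topic
git symbolic-ref --short HEAD           # prints topic
```

Note that pointing `HEAD` at an unborn branch this way is permitted; the branch springs into existence at the next commit.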
git git-update-index

git-update-index
================

Name
----

git-update-index - Register file contents in the working tree to the index

Synopsis
--------

```
git update-index
	[--add] [--remove | --force-remove] [--replace]
	[--refresh] [-q] [--unmerged] [--ignore-missing]
	[(--cacheinfo <mode>,<object>,<file>)…​]
	[--chmod=(+|-)x]
	[--[no-]assume-unchanged]
	[--[no-]skip-worktree]
	[--[no-]ignore-skip-worktree-entries]
	[--[no-]fsmonitor-valid]
	[--ignore-submodules]
	[--[no-]split-index]
	[--[no-|test-|force-]untracked-cache]
	[--[no-]fsmonitor]
	[--really-refresh] [--unresolve] [--again | -g]
	[--info-only] [--index-info]
	[-z] [--stdin] [--index-version <n>]
	[--verbose]
	[--] [<file>…​]
```

Description
-----------

Modifies the index. Each file mentioned is updated into the index and any `unmerged` or `needs updating` state is cleared.

See also [git-add[1]](git-add) for a more user-friendly way to do some of the most common operations on the index.

The way `git update-index` handles files it is told about can be modified using the various options:

Options
-------

--add
If a specified file isn’t in the index already then it’s added. Default behavior is to ignore new files.

--remove
If a specified file is in the index but is missing then it’s removed. Default behavior is to ignore removed files.

--refresh
Looks at the current index and checks to see if merges or updates are needed by checking stat() information.

-q
Quiet. If --refresh finds that the index needs an update, the default behavior is to error out. This option makes `git update-index` continue anyway.

--ignore-submodules
Do not try to update submodules. This option is only respected when passed before --refresh.

--unmerged
If --refresh finds unmerged changes in the index, the default behavior is to error out. This option makes `git update-index` continue anyway.
--ignore-missing
Ignores missing files during a --refresh.

--cacheinfo <mode>,<object>,<path>
--cacheinfo <mode> <object> <path>
Directly insert the specified info into the index. For backward compatibility, you can also give these three arguments as three separate parameters, but new users are encouraged to use a single-parameter form.

--index-info
Read index information from stdin.

--chmod=(+|-)x
Set the execute permissions on the updated files.

--[no-]assume-unchanged
When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. This is sometimes helpful when working with a big project on a filesystem that has a very slow lstat(2) system call (e.g. cifs).

Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually.

--really-refresh
Like `--refresh`, but checks stat information unconditionally, without regard to the "assume unchanged" setting.

--[no-]skip-worktree
When one of these flags is specified, the object names recorded for the paths are not updated. Instead, these options set and unset the "skip-worktree" bit for the paths. See section "Skip-worktree bit" below for more information.

--[no-]ignore-skip-worktree-entries
Do not remove skip-worktree (AKA "index-only") entries even when the `--remove` option was specified.

--[no-]fsmonitor-valid
When one of these flags is specified, the object names recorded for the paths are not updated. Instead, these options set and unset the "fsmonitor valid" bit for the paths.
See section "File System Monitor" below for more information.

-g
--again
Runs `git update-index` itself on the paths whose index entries are different from those from the `HEAD` commit.

--unresolve
Restores the `unmerged` or `needs updating` state of a file during a merge if it was cleared by accident.

--info-only
Do not create objects in the object database for all <file> arguments that follow this flag; just insert their object IDs into the index.

--force-remove
Remove the file from the index even when the working directory still has such a file. (Implies --remove.)

--replace
By default, when a file `path` exists in the index, `git update-index` refuses an attempt to add `path/file`. Similarly if a file `path/file` exists, a file `path` cannot be added. With the --replace flag, existing entries that conflict with the entry being added are automatically removed with warning messages.

--stdin
Instead of taking a list of paths from the command line, read the list of paths from the standard input. Paths are separated by LF (i.e. one path per line) by default.

--verbose
Report what is being added and removed from the index.

--index-version <n>
Write the resulting index out in the named on-disk format version. Supported versions are 2, 3 and 4. The current default version is 2 or 3, depending on whether extra features are used, such as `git add -N`.

Version 4 performs a simple pathname compression that reduces index size by 30%-50% on large repositories, which results in faster load time. Version 4 is relatively young (first released in 1.8.0 in October 2012). Other Git implementations such as JGit and libgit2 may not support it yet.

-z
Only meaningful with `--stdin` or `--index-info`; paths are separated with NUL character instead of LF.

--split-index
--no-split-index
Enable or disable split index mode. If split-index mode is already enabled and `--split-index` is given again, all changes in $GIT\_DIR/index are pushed back to the shared index file.
These options take effect whatever the value of the `core.splitIndex` configuration variable (see [git-config[1]](git-config)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect the next time the index is read and this will remove the intended effect of the option.

--untracked-cache
--no-untracked-cache
Enable or disable the untracked cache feature. Please use `--test-untracked-cache` before enabling it.

These options take effect whatever the value of the `core.untrackedCache` configuration variable (see [git-config[1]](git-config)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect the next time the index is read and this will remove the intended effect of the option.

--test-untracked-cache
Only perform tests on the working directory to make sure the untracked cache can be used. You have to manually enable the untracked cache using `--untracked-cache` or `--force-untracked-cache` or the `core.untrackedCache` configuration variable afterwards if you really want to use it. If a test fails the exit code is 1 and a message explains what is not working as needed, otherwise the exit code is 0 and OK is printed.

--force-untracked-cache
Same as `--untracked-cache`. Provided for backwards compatibility with older versions of Git where `--untracked-cache` used to imply `--test-untracked-cache` but this option would enable the extension unconditionally.

--fsmonitor
--no-fsmonitor
Enable or disable the file system monitor feature.

These options take effect whatever the value of the `core.fsmonitor` configuration variable (see [git-config[1]](git-config)). But a warning is emitted when the change goes against the configured value, as the configured value will take effect the next time the index is read and this will remove the intended effect of the option.

--
Do not interpret any more arguments as options.

<file>
Files to act on.
Note that files beginning with `.` are discarded. This includes `./file` and `dir/./file`. If you don’t want this, then use cleaner names. The same applies to directories ending with `/` and paths with `//`.

Using --refresh
---------------

`--refresh` does not calculate a new sha1 file or bring the index up to date for mode/content changes. But what it **does** do is to "re-match" the stat information of a file with the index, so that you can refresh the index for a file that hasn’t been changed but where the stat entry is out of date.

For example, you’d want to do this after doing a `git read-tree`, to link up the stat index details with the proper files.

Using --cacheinfo or --info-only
--------------------------------

`--cacheinfo` is used to register a file that is not in the current working directory. This is useful for minimum-checkout merging. To pretend you have a file at path with mode and sha1, say:

```
$ git update-index --add --cacheinfo <mode>,<sha1>,<path>
```

`--info-only` is used to register files without placing them in the object database. This is useful for status-only repositories.

Both `--cacheinfo` and `--info-only` behave similarly: the index is updated but the object database isn’t. `--cacheinfo` is useful when the object is in the database but the file isn’t available locally. `--info-only` is useful when the file is available, but you do not wish to update the object database.

Using --index-info
------------------

`--index-info` is a more powerful mechanism that lets you feed multiple entry definitions from the standard input, and designed specifically for scripts. It can take inputs of three formats:

1. mode SP type SP sha1 TAB path

   This format is to stuff `git ls-tree` output into the index.
2. mode SP sha1 SP stage TAB path

   This format is to put higher order stages into the index file and matches `git ls-files --stage` output.
3.
   mode SP sha1 TAB path

   This format is no longer produced by any Git command, but is and will continue to be supported by `update-index --index-info`.

To place a higher stage entry to the index, the path should first be removed by feeding a mode=0 entry for the path, and then feeding necessary input lines in the second format.

For example, starting with this index:

```
$ git ls-files -s
100644 8a1218a1024a212bb3db30becd860315f9f3ac52 0	frotz
```

you can feed the following input to `--index-info`:

```
$ git update-index --index-info
0 0000000000000000000000000000000000000000	frotz
100644 8a1218a1024a212bb3db30becd860315f9f3ac52 1	frotz
100755 8a1218a1024a212bb3db30becd860315f9f3ac52 2	frotz
```

The first line of the input feeds 0 as the mode to remove the path; the SHA-1 does not matter as long as it is well formatted. Then the second and third line feeds stage 1 and stage 2 entries for that path. After the above, we would end up with this:

```
$ git ls-files -s
100644 8a1218a1024a212bb3db30becd860315f9f3ac52 1	frotz
100755 8a1218a1024a212bb3db30becd860315f9f3ac52 2	frotz
```

Using “assume unchanged” bit
----------------------------

Many operations in Git depend on your filesystem to have an efficient `lstat(2)` implementation, so that `st_mtime` information for working tree files can be cheaply checked to see if the file contents have changed from the version recorded in the index file. Unfortunately, some filesystems have inefficient `lstat(2)`. If your filesystem is one of them, you can set the "assume unchanged" bit on paths you have not changed to cause Git not to do this check. Note that setting this bit on a path does not mean Git will check the contents of the file to see if it has changed; it makes Git omit any checking and assume it has **not** changed. When you make changes to working tree files, you have to explicitly tell Git about it by dropping the "assume unchanged" bit, either before or after you modify them.
In order to set the "assume unchanged" bit, use the `--assume-unchanged` option. To unset, use `--no-assume-unchanged`. To see which files have the "assume unchanged" bit set, use `git ls-files -v` (see [git-ls-files[1]](git-ls-files)).

The command looks at the `core.ignorestat` configuration variable. When this is true, paths updated with `git update-index paths...` and paths updated with other Git commands that update both index and working tree (e.g. `git apply --index`, `git checkout-index -u`, and `git read-tree -u`) are automatically marked as "assume unchanged". Note that the "assume unchanged" bit is **not** set if `git update-index --refresh` finds the working tree file matches the index (use `git update-index --really-refresh` if you want to mark them as "assume unchanged").

Sometimes users confuse the assume-unchanged bit with the skip-worktree bit. See the final paragraph in the "Skip-worktree bit" section below for an explanation of the differences.

Examples
--------

To update and refresh only the files already checked out:

```
$ git checkout-index -n -f -a && git update-index --ignore-missing --refresh
```

On an inefficient filesystem with `core.ignorestat` set:

```
$ git update-index --really-refresh (1)
$ git update-index --no-assume-unchanged foo.c (2)
$ git diff --name-only (3)
$ edit foo.c
$ git diff --name-only (4)
M foo.c
$ git update-index foo.c (5)
$ git diff --name-only (6)
$ edit foo.c
$ git diff --name-only (7)
$ git update-index --no-assume-unchanged foo.c (8)
$ git diff --name-only (9)
M foo.c
```

1. forces lstat(2) to set "assume unchanged" bits for paths that match the index.
2. mark the path to be edited.
3. this does lstat(2) and finds the index matches the path.
4. this does lstat(2) and finds the index does **not** match the path.
5. registering the new version to the index sets the "assume unchanged" bit.
6. and it is assumed unchanged.
7. even after you edit it.
8. you can tell about the change after the fact.
9.
   now it checks with lstat(2) and finds it has been changed.

Skip-worktree bit
-----------------

The skip-worktree bit can be defined in one (long) sentence: Tell git to avoid writing the file to the working directory when reasonably possible, and treat the file as unchanged when it is not present in the working directory.

Note that not all git commands will pay attention to this bit, and some only partially support it.

The update-index flags and the read-tree capabilities relating to the skip-worktree bit predated the introduction of the [git-sparse-checkout[1]](git-sparse-checkout) command, which provides a much easier way to configure and handle the skip-worktree bits. If you want to reduce your working tree to only deal with a subset of the files in the repository, we strongly encourage the use of [git-sparse-checkout[1]](git-sparse-checkout) in preference to the low-level update-index and read-tree primitives.

The primary purpose of the skip-worktree bit is to enable sparse checkouts, i.e. to have working directories with only a subset of paths present. When the skip-worktree bit is set, Git commands (such as `switch`, `pull`, `merge`) will avoid writing these files. However, these commands will sometimes write these files anyway in important cases such as conflicts during a merge or rebase. Git commands will also avoid treating the lack of such files as an intentional deletion; for example `git add -u` will not stage a deletion for these files and `git commit -a` will not make a commit deleting them either.

Although this bit looks similar to the assume-unchanged bit, its goal is different. The assume-unchanged bit is for leaving the file in the working tree but having Git omit checking it for changes and presuming that the file has not been changed (though if it can determine without stat’ing the file that it has changed, it is free to record the changes).
skip-worktree tells Git to ignore the absence of the file, avoid updating it when possible with commands that normally update much of the working directory (e.g. `checkout`, `switch`, `pull`, etc.), and not have its absence be recorded in commits.

Note that in sparse checkouts (set up by `git sparse-checkout` or by configuring core.sparseCheckout to true), if a file is marked as skip-worktree in the index but is found in the working tree, Git will clear the skip-worktree bit for that file.

Split index
-----------

This mode is designed for repositories with very large indexes, and aims at reducing the time it takes to repeatedly write these indexes.

In this mode, the index is split into two files, $GIT\_DIR/index and $GIT\_DIR/sharedindex.<SHA-1>. Changes are accumulated in $GIT\_DIR/index, the split index, while the shared index file contains all index entries and stays unchanged.

All changes in the split index are pushed back to the shared index file when the number of entries in the split index reaches a level specified by the splitIndex.maxPercentChange config variable (see [git-config[1]](git-config)).

Each time a new shared index file is created, the old shared index files are deleted if their modification time is older than what is specified by the splitIndex.sharedIndexExpire config variable (see [git-config[1]](git-config)).

To avoid deleting a shared index file that is still used, its modification time is updated to the current time every time a new split index based on the shared index file is either created or read from.

Untracked cache
---------------

This cache is meant to speed up commands that involve determining untracked files such as `git status`.

This feature works by recording the mtime of the working tree directories and then omitting reading directories and stat calls against files in those directories whose mtime hasn’t changed.
For this to work the underlying operating system and file system must change the `st_mtime` field of directories if files in the directory are added, modified or deleted.

You can test whether the filesystem supports that with the `--test-untracked-cache` option. The `--untracked-cache` option used to implicitly perform that test in older versions of Git, but that’s no longer the case.

If you want to enable (or disable) this feature, it is easier to use the `core.untrackedCache` configuration variable (see [git-config[1]](git-config)) than using the `--untracked-cache` option to `git update-index` in each repository, especially if you want to do so across all repositories you use, because you can set the configuration variable to `true` (or `false`) in your `$HOME/.gitconfig` just once and have it affect all repositories you touch.

When the `core.untrackedCache` configuration variable is changed, the untracked cache is added to or removed from the index the next time a command reads the index; while when `--[no-|force-]untracked-cache` are used, the untracked cache is immediately added to or removed from the index.

Before 2.17, the untracked cache had a bug where replacing a directory with a symlink to another directory could cause it to incorrectly show files tracked by git as untracked. See the "status: add a failing test showing a core.untrackedCache bug" commit to git.git. A workaround for that is (and this might work for other undiscovered bugs in the future):

```
$ git -c core.untrackedCache=false status
```

This bug has also been shown to affect non-symlink cases of replacing a directory with a file when it comes to the internal structures of the untracked cache, but no case has been reported where this resulted in wrong "git status" output.
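The probe-then-enable sequence described earlier in this section can be sketched as follows in a scratch repository; the temporary directory is illustrative:

```shell
# Sketch: probe the filesystem first, then enable the untracked cache
# for this repository only. The temporary repo is illustrative.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# --test-untracked-cache exits 0 only if directory mtimes
# behave the way the untracked cache requires.
if git update-index --test-untracked-cache >/dev/null; then
    git config core.untrackedCache true   # picked up on the next index read
fi

git status >/dev/null   # any command that reads the index
```

Using the configuration variable rather than `--untracked-cache` keeps the choice per-repository (or global, via `--global`) as the text above recommends.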
There are also cases where existing indexes written by git versions before 2.17 will reference directories that don’t exist anymore, potentially causing many "could not open directory" warnings to be printed on "git status". These are new warnings for existing issues that were previously silently discarded.

As with the bug described above, the solution is to one-off do a "git status" run with `core.untrackedCache=false` to flush out the leftover bad data.

File system monitor
-------------------

This feature is intended to speed up git operations for repos that have large working directories.

It enables git to work together with a file system monitor (see [git-fsmonitor--daemon[1]](git-fsmonitor--daemon) and the "fsmonitor-watchman" section of [githooks[5]](githooks)) that can inform it as to what files have been modified. This enables git to avoid having to lstat() every file to find modified files.

When used in conjunction with the untracked cache, it can further improve performance by avoiding the cost of scanning the entire working directory looking for new files.

If you want to enable (or disable) this feature, it is easier to use the `core.fsmonitor` configuration variable (see [git-config[1]](git-config)) than using the `--fsmonitor` option to `git update-index` in each repository, especially if you want to do so across all repositories you use, because you can set the configuration variable in your `$HOME/.gitconfig` just once and have it affect all repositories you touch.

When the `core.fsmonitor` configuration variable is changed, the file system monitor is added to or removed from the index the next time a command reads the index. When `--[no-]fsmonitor` are used, the file system monitor is immediately added to or removed from the index.

Configuration
-------------

The command honors the `core.filemode` configuration variable.
If your repository is on a filesystem whose executable bits are unreliable, this should be set to `false` (see [git-config[1]](git-config)). This causes the command to ignore differences in file modes recorded in the index and the file mode on the filesystem if they differ only on executable bit. On such an unfortunate filesystem, you may need to use `git update-index --chmod=`.

Quite similarly, if the `core.symlinks` configuration variable is set to `false` (see [git-config[1]](git-config)), symbolic links are checked out as plain files, and this command does not modify a recorded file mode from symbolic link to regular file.

The command looks at the `core.ignorestat` configuration variable. See the `Using "assume unchanged" bit` section above.

The command also looks at the `core.trustctime` configuration variable. It can be useful when the inode change time is regularly modified by something outside Git (file system crawlers and backup systems use ctime for marking files processed) (see [git-config[1]](git-config)).

The untracked cache extension can be enabled by the `core.untrackedCache` configuration variable (see [git-config[1]](git-config)).

Notes
-----

Users often try to use the assume-unchanged and skip-worktree bits to tell Git to ignore changes to files that are tracked. This does not work as expected, since Git may still check working tree files against the index when performing certain operations. In general, Git does not provide a way to ignore changes to tracked files, so alternate solutions are recommended.

For example, if the file you want to change is some sort of config file, the repository can include a sample config file that can then be copied into the ignored name and modified. The repository can even include a script to treat the sample file as a template, modifying and copying it automatically.

See also
--------

[git-config[1]](git-config), [git-add[1]](git-add), [git-ls-files[1]](git-ls-files)
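The sample-file approach suggested in the NOTES section can be sketched as follows; the file names (`config.sample`, `config.local`) are purely illustrative, not a Git convention:

```shell
# Sketch of the NOTES workaround: version a sample config file,
# keep each user's edited copy ignored. File names are illustrative.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

printf 'port=8080\n' > config.sample
printf 'config.local\n' > .gitignore    # the edited copy stays untracked
git add config.sample .gitignore

# Each user copies the sample and edits the ignored copy freely:
cp config.sample config.local
printf 'port=9090\n' > config.local

git status --short -- config.local     # prints nothing: the copy is ignored
```

Because the edited copy is ignored rather than assume-unchanged, no Git operation will ever try to reconcile it with the index.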
git git-stripspace

git-stripspace
==============

Name
----

git-stripspace - Remove unnecessary whitespace

Synopsis
--------

```
git stripspace [-s | --strip-comments]
git stripspace [-c | --comment-lines]
```

Description
-----------

Read text, such as commit messages, notes, tags and branch descriptions, from the standard input and clean it in the manner used by Git.

With no arguments, this will:

* remove trailing whitespace from all lines
* collapse multiple consecutive empty lines into one empty line
* remove empty lines from the beginning and end of the input
* add a missing `\n` to the last line if necessary.

In the case where the input consists entirely of whitespace characters, no output will be produced.

**NOTE**: This is intended for cleaning metadata; prefer the `--whitespace=fix` mode of [git-apply[1]](git-apply) for correcting whitespace of patches or files in the repository.

Options
-------

-s
--strip-comments
Skip and remove all lines starting with the comment character (default `#`).

-c
--comment-lines
Prepend the comment character and a blank to each line. Lines will automatically be terminated with a newline. On empty lines, only the comment character will be prepended.

Examples
--------

Given the following noisy input with `$` indicating the end of a line:

```
|A brief introduction $
| $
|$
|A new paragraph$
|# with a commented-out line $
|explaining lots of stuff.$
|$
|# An old paragraph, also commented-out. $
| $
|The end.$
| $
```

Use `git stripspace` with no arguments to obtain:

```
|A brief introduction$
|$
|A new paragraph$
|# with a commented-out line$
|explaining lots of stuff.$
|$
|# An old paragraph, also commented-out.$
|$
|The end.$
```

Use `git stripspace --strip-comments` to obtain:

```
|A brief introduction$
|$
|A new paragraph$
|explaining lots of stuff.$
|$
|The end.$
```

git gitattributes

gitattributes
=============

Name
----

gitattributes - Defining attributes per path

Synopsis
--------

$GIT\_DIR/info/attributes, .gitattributes

Description
-----------

A `gitattributes` file is a simple text file that gives `attributes` to pathnames.

Each line in a `gitattributes` file is of the form:

```
pattern attr1 attr2 ...
```

That is, a pattern followed by an attributes list, separated by whitespace. Leading and trailing whitespace is ignored. Lines that begin with `#` are ignored. Patterns that begin with a double quote are quoted in C style. When the pattern matches the path in question, the attributes listed on the line are given to the path.

Each attribute can be in one of these states for a given path:

Set
The path has the attribute with special value "true"; this is specified by listing only the name of the attribute in the attribute list.

Unset
The path has the attribute with special value "false"; this is specified by listing the name of the attribute prefixed with a dash `-` in the attribute list.

Set to a value
The path has the attribute with specified string value; this is specified by listing the name of the attribute followed by an equal sign `=` and its value in the attribute list.

Unspecified
No pattern matches the path, and nothing says if the path has or does not have the attribute; the attribute for the path is said to be Unspecified.

When more than one pattern matches the path, a later line overrides an earlier line. This overriding is done per attribute.
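The per-attribute overriding described above can be observed with `git check-attr`; a minimal sketch (the patterns and file names are illustrative):

```shell
# Sketch: a later line overrides an earlier one, per attribute.
# The repository, patterns, and file names are illustrative.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

cat > .gitattributes <<'EOF'
*.txt   text eol=lf
a.txt   -text
EOF

git check-attr text eol -- a.txt
# a.txt: text: unset    (the later `a.txt -text` line wins for `text`)
# a.txt: eol: lf        (`eol` from the earlier `*.txt` line still applies)
```

Note that only the `text` attribute is overridden by the later line; `eol`, which the later line does not mention, keeps the value from the earlier match.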
The rules by which the pattern matches paths are the same as in `.gitignore` files (see [gitignore[5]](gitignore)), with a few exceptions:

* negative patterns are forbidden
* patterns that match a directory do not recursively match paths inside that directory (so using the trailing-slash `path/` syntax is pointless in an attributes file; use `path/**` instead)

When deciding what attributes are assigned to a path, Git consults the `$GIT_DIR/info/attributes` file (which has the highest precedence), the `.gitattributes` file in the same directory as the path in question, and its parent directories up to the toplevel of the work tree (the further the directory that contains `.gitattributes` is from the path in question, the lower its precedence). Finally, global and system-wide files are considered (they have the lowest precedence).

When the `.gitattributes` file is missing from the work tree, the path in the index is used as a fall-back. During the checkout process, `.gitattributes` in the index is used and then the file in the working tree is used as a fall-back.

If you wish to affect only a single repository (i.e., to assign attributes to files that are particular to one user’s workflow for that repository), then attributes should be placed in the `$GIT_DIR/info/attributes` file. Attributes which should be version-controlled and distributed to other repositories (i.e., attributes of interest to all users) should go into `.gitattributes` files. Attributes that should affect all repositories for a single user should be placed in a file specified by the `core.attributesFile` configuration option (see [git-config[1]](git-config)). Its default value is `$XDG_CONFIG_HOME/git/attributes`. If `$XDG_CONFIG_HOME` is either not set or empty, `$HOME/.config/git/attributes` is used instead. Attributes for all users on a system should be placed in the `$(prefix)/etc/gitattributes` file.

Sometimes you may need to override the setting of an attribute for a path back to the `Unspecified` state.
This can be done by listing the name of the attribute prefixed with an exclamation point `!`. Effects ------- Certain operations by Git can be influenced by assigning particular attributes to a path. Currently, the following operations are attributes-aware. ### Checking-out and checking-in These attributes affect how the contents stored in the repository are copied to the working tree files when commands such as `git switch`, `git checkout` and `git merge` run. They also affect how Git stores the contents you prepare in the working tree in the repository upon `git add` and `git commit`. #### `text` This attribute enables and controls end-of-line normalization. When a text file is normalized, its line endings are converted to LF in the repository. To control what line ending style is used in the working directory, use the `eol` attribute for a single file and the `core.eol` configuration variable for all text files. Note that setting `core.autocrlf` to `true` or `input` overrides `core.eol` (see the definitions of those options in [git-config[1]](git-config)). Set Setting the `text` attribute on a path enables end-of-line normalization and marks the path as a text file. End-of-line conversion takes place without guessing the content type. Unset Unsetting the `text` attribute on a path tells Git not to attempt any end-of-line conversion upon checkin or checkout. Set to string value "auto" When `text` is set to "auto", the path is marked for automatic end-of-line conversion. If Git decides that the content is text, its line endings are converted to LF on checkin. When the file has been committed with CRLF, no conversion is done. Unspecified If the `text` attribute is unspecified, Git uses the `core.autocrlf` configuration variable to determine if the file should be converted. Any other value causes Git to act as if `text` has been left unspecified. #### `eol` This attribute sets a specific line-ending style to be used in the working directory. 
This attribute has effect only if the `text` attribute is set or unspecified, or if it is set to `auto`, the file is detected as text, and it is stored with LF endings in the index. Note that setting this attribute on paths which are in the index with CRLF line endings may cause the paths to be considered dirty unless `text=auto` is set. Adding the path to the index again will normalize the line endings in the index.

Set to string value "crlf"

This setting forces Git to normalize line endings for this file on checkin and convert them to CRLF when the file is checked out.

Set to string value "lf"

This setting forces Git to normalize line endings to LF on checkin and prevents conversion to CRLF when the file is checked out.

#### Backwards compatibility with `crlf` attribute

For backwards compatibility, the `crlf` attribute is interpreted as follows:

```
crlf		text
-crlf		-text
crlf=input	eol=lf
```

#### End-of-line conversion

While Git normally leaves file contents alone, it can be configured to normalize line endings to LF in the repository and, optionally, to convert them to CRLF when files are checked out.

If you simply want to have CRLF line endings in your working directory regardless of the repository you are working with, you can set the config variable "core.autocrlf" without using any attributes.

```
[core]
	autocrlf = true
```

This does not force normalization of text files, but does ensure that text files that you introduce to the repository have their line endings normalized to LF when they are added, and that files that are already normalized in the repository stay normalized.

If you want to ensure that text files that any contributor introduces to the repository have their line endings normalized, you can set the `text` attribute to "auto" for `all` files.

```
* text=auto
```

The attributes allow fine-grained control over how line endings are converted.
Here is an example that will make Git normalize .txt, .vcproj and .sh files, ensure that .vcproj files have CRLF and .sh files have LF in the working directory, and prevent .jpg files from being normalized regardless of their content.

```
*		text=auto
*.txt		text
*.vcproj	text eol=crlf
*.sh		text eol=lf
*.jpg		-text
```

| | |
| --- | --- |
| Note | When `text=auto` conversion is enabled in a cross-platform project using push and pull to a central repository, the text files containing CRLFs should be normalized. |

From a clean working directory:

```
$ echo "* text=auto" >.gitattributes
$ git add --renormalize .
$ git status        # Show files that will be normalized
$ git commit -m "Introduce end-of-line normalization"
```

If any files that should not be normalized show up in `git status`, unset their `text` attribute before running `git add -u`.

```
manual.pdf	-text
```

Conversely, text files that Git does not detect can have normalization enabled manually.

```
weirdchars.txt	text
```

If `core.safecrlf` is set to "true" or "warn", Git verifies if the conversion is reversible for the current setting of `core.autocrlf`. For "true", Git rejects irreversible conversions; for "warn", Git only prints a warning but accepts an irreversible conversion. The safety triggers to prevent such a conversion from being applied to the files in the work tree, but there are a few exceptions. Even though…​

* `git add` itself does not touch the files in the work tree, the next checkout would, so the safety triggers;
* `git apply` to update a text file with a patch does touch the files in the work tree, but the operation is about text files and CRLF conversion is about fixing the line ending inconsistencies, so the safety does not trigger;
* `git diff` itself does not touch the files in the work tree, but it is often run to inspect the changes you intend to `git add` next. To catch potential problems early, the safety triggers.
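The `core.safecrlf` check can be seen in action in a scratch repository. The sketch below is illustrative (repository and file names are invented):

```shell
git init -q safecrlf-demo && cd safecrlf-demo
git config core.autocrlf input   # checkin converts CRLF to LF
git config core.safecrlf true    # reject irreversible conversions

printf 'one\r\ntwo\r\n' > notes.txt

# Checkin would replace CRLF with LF, and a later checkout would
# not restore CRLF, so the conversion is irreversible and refused:
git add notes.txt || echo 'refused'

git config core.safecrlf warn    # warn instead, but accept
git add notes.txt
```

With `safecrlf=true` the first `git add` aborts with an error; with `warn` the same conversion goes through after printing a warning.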
#### `working-tree-encoding`

Git recognizes files encoded in ASCII or one of its supersets (e.g. UTF-8, ISO-8859-1, …​) as text files. Files encoded in certain other encodings (e.g. UTF-16) are interpreted as binary and consequently built-in Git text processing tools (e.g. `git diff`) as well as most Git web front ends do not visualize the contents of these files by default.

In these cases you can tell Git the encoding of a file in the working directory with the `working-tree-encoding` attribute. If a file with this attribute is added to Git, then Git re-encodes the content from the specified encoding to UTF-8. Finally, Git stores the UTF-8 encoded content in its internal data structure (called "the index"). On checkout the content is re-encoded back to the specified encoding.

Please note that using the `working-tree-encoding` attribute may have a number of pitfalls:

* Alternative Git implementations (e.g. JGit or libgit2) and older Git versions (as of March 2018) do not support the `working-tree-encoding` attribute. If you decide to use the `working-tree-encoding` attribute in your repository, then it is strongly recommended to ensure that all clients working with the repository support it.

  For example, Microsoft Visual Studio resources files (`*.rc`) or PowerShell script files (`*.ps1`) are sometimes encoded in UTF-16. If you declare `*.ps1` files as UTF-16 and you add `foo.ps1` with a `working-tree-encoding` enabled Git client, then `foo.ps1` will be stored as UTF-8 internally. A client without `working-tree-encoding` support will check out `foo.ps1` as a UTF-8 encoded file. This will typically cause trouble for the users of this file.

  If a Git client that does not support the `working-tree-encoding` attribute adds a new file `bar.ps1`, then `bar.ps1` will be stored "as-is" internally (in this example probably as UTF-16). A client with `working-tree-encoding` support will interpret the internal contents as UTF-8 and try to convert it to UTF-16 on checkout.
That operation will fail and cause an error.

* Reencoding content to non-UTF encodings can cause errors as the conversion might not be UTF-8 round trip safe. If you suspect your encoding is not round trip safe, then add it to `core.checkRoundtripEncoding` to make Git check the round trip encoding (see [git-config[1]](git-config)). SHIFT-JIS (Japanese character set) is known to have round trip issues with UTF-8 and is checked by default.
* Reencoding content requires resources that might slow down certain Git operations (e.g. `git checkout` or `git add`).

Use the `working-tree-encoding` attribute only if you cannot store a file in UTF-8 encoding and if you want Git to be able to process the content as text.

As an example, use the following attributes if your `*.ps1` files are UTF-16 encoded with a byte order mark (BOM) and you want Git to perform automatic line ending conversion based on your platform.

```
*.ps1		text working-tree-encoding=UTF-16
```

Use the following attributes if your `*.ps1` files are UTF-16 little endian encoded without BOM and you want Git to use Windows line endings in the working directory (use `UTF-16LE-BOM` instead of `UTF-16LE` if you want UTF-16 little endian with BOM). Please note that it is highly recommended to explicitly define the line endings with `eol` if the `working-tree-encoding` attribute is used to avoid ambiguity.

```
*.ps1		text working-tree-encoding=UTF-16LE eol=CRLF
```

You can get a list of all available encodings on your platform with the following command:

```
iconv --list
```

If you do not know the encoding of a file, then you can use the `file` command to guess the encoding:

```
file foo.ps1
```

#### `ident`

When the attribute `ident` is set for a path, Git replaces `$Id$` in the blob object with `$Id:`, followed by the 40-character hexadecimal blob object name, followed by a dollar sign `$` upon checkout. Any byte sequence that begins with `$Id:` and ends with `$` in the worktree file is replaced with `$Id$` upon check-in.
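A quick way to see `ident` in action in a scratch repository (a sketch with invented names; a committable identity is configured inline):

```shell
git init -q ident-demo && cd ident-demo
git config user.name Demo && git config user.email demo@example.com

echo '*.c ident' > .gitattributes
printf '/* $Id$ */\n' > hello.c
git add . && git commit -q -m 'add hello.c'

# Remove and check the file out again so the keyword is expanded:
rm hello.c
git checkout -- hello.c
cat hello.c   # the $Id$ now carries the blob object name
```

The committed blob still contains the collapsed `$Id$`; the expansion happens only in the worktree copy on checkout.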
#### `filter`

A `filter` attribute can be set to a string value that names a filter driver specified in the configuration.

A filter driver consists of a `clean` command and a `smudge` command, either of which can be left unspecified. Upon checkout, when the `smudge` command is specified, the command is fed the blob object from its standard input, and its standard output is used to update the worktree file. Similarly, the `clean` command is used to convert the contents of the worktree file upon checkin. By default these commands process only a single blob and terminate. If a long running `process` filter is used in place of `clean` and/or `smudge` filters, then Git can process all blobs with a single filter command invocation for the entire life of a single Git command, for example `git add --all`. If a long running `process` filter is configured then it always takes precedence over a configured single blob filter. See the section below for the description of the protocol used to communicate with a `process` filter.

One use of the content filtering is to massage the content into a shape that is more convenient for the platform, filesystem, and the user to use. For this mode of operation, the key phrase here is "more convenient" and not "turning something unusable into usable". In other words, the intent is that if someone unsets the filter driver definition, or does not have the appropriate filter program, the project should still be usable.

Another use of the content filtering is to store content that cannot be directly used in the repository (e.g. a UUID that refers to the true content stored outside Git, or encrypted content) and turn it into a usable form upon checkout (e.g. download the external content, or decrypt the encrypted content).

These two filters behave differently, and by default, a filter is taken as the former, massaging the contents into a more convenient shape.
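A toy example of the first, "convenience" style of filter (driver and file names are invented; `tr` stands in for a real clean/smudge pair):

```shell
git init -q filter-demo && cd filter-demo

# Store files upper-cased in the repository, lower-cased on disk.
git config filter.upcase.clean  'tr a-z A-Z'
git config filter.upcase.smudge 'tr A-Z a-z'
echo '*.txt filter=upcase' > .gitattributes

echo 'hello' > greeting.txt
git add greeting.txt

# The staged blob holds the output of the clean command:
git show :greeting.txt   # prints HELLO
```

If the `upcase` driver is later unset, the repository stays usable; checkouts simply deliver the stored upper-cased content as-is.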
A missing filter driver definition in the config, or a filter driver that exits with a non-zero status, is not an error but makes the filter a no-op passthru.

You can declare that a filter turns content that by itself is unusable into usable content by setting the `filter.<driver>.required` configuration variable to `true`.

Note: Whenever the clean filter is changed, the repo should be renormalized: `$ git add --renormalize .`

For example, in .gitattributes, you would assign the `filter` attribute for paths.

```
*.c	filter=indent
```

Then you would define a "filter.indent.clean" and "filter.indent.smudge" configuration in your .git/config to specify a pair of commands to modify the contents of C programs when the source files are checked in ("clean" is run) and checked out (no change is made because the command is "cat").

```
[filter "indent"]
	clean = indent
	smudge = cat
```

For best results, `clean` should not alter its output further if it is run twice ("clean→clean" should be equivalent to "clean"), and multiple `smudge` commands should not alter `clean`'s output ("smudge→smudge→clean" should be equivalent to "clean"). See the section on merging below.

The "indent" filter is well-behaved in this regard: it will not modify input that is already correctly indented. In this case, the lack of a smudge filter means that the clean filter `must` accept its own output without modifying it.

If a filter `must` succeed in order to make the stored contents usable, you can declare that the filter is `required`, in the configuration:

```
[filter "crypt"]
	clean = openssl enc ...
	smudge = openssl enc -d ...
	required
```

Sequence "%f" on the filter command line is replaced with the name of the file the filter is working on. A filter might use this in keyword substitution. For example:

```
[filter "p4"]
	clean = git-p4-filter --clean %f
	smudge = git-p4-filter --smudge %f
```

Note that "%f" is the name of the path that is being worked on.
Depending on the version that is being filtered, the corresponding file on disk may not exist, or may have different contents. So, smudge and clean commands should not try to access the file on disk, but only act as filters on the content provided to them on standard input.

#### Long Running Filter Process

If the filter command (a string value) is defined via `filter.<driver>.process` then Git can process all blobs with a single filter invocation for the entire life of a single Git command. This is achieved by using the long-running process protocol (described in technical/long-running-process-protocol.txt).

When Git encounters the first file that needs to be cleaned or smudged, it starts the filter and performs the handshake. In the handshake, the welcome message sent by Git is "git-filter-client", only version 2 is supported, and the supported capabilities are "clean", "smudge", and "delay".

Afterwards Git sends a list of "key=value" pairs terminated with a flush packet. The list will contain at least the filter command (based on the supported capabilities) and the pathname of the file to filter relative to the repository root. Right after the flush packet Git sends the content split into zero or more pkt-line packets and a flush packet to terminate the content. Please note that the filter must not send any response before it has received the content and the final flush packet. Also note that the "value" of a "key=value" pair can contain the "=" character whereas the key would never contain that character.

```
packet: git> command=smudge
packet: git> pathname=path/testfile.dat
packet: git> 0000
packet: git> CONTENT
packet: git> 0000
```

The filter is expected to respond with a list of "key=value" pairs terminated with a flush packet. If the filter does not experience problems then the list must contain a "success" status. Right after these packets the filter is expected to send the content in zero or more pkt-line packets and a flush packet at the end.
Finally, a second list of "key=value" pairs terminated with a flush packet is expected. The filter can change the status in the second list or keep the status as is with an empty list. Please note that the empty list must be terminated with a flush packet regardless. ``` packet: git< status=success packet: git< 0000 packet: git< SMUDGED_CONTENT packet: git< 0000 packet: git< 0000 # empty list, keep "status=success" unchanged! ``` If the result content is empty then the filter is expected to respond with a "success" status and a flush packet to signal the empty content. ``` packet: git< status=success packet: git< 0000 packet: git< 0000 # empty content! packet: git< 0000 # empty list, keep "status=success" unchanged! ``` In case the filter cannot or does not want to process the content, it is expected to respond with an "error" status. ``` packet: git< status=error packet: git< 0000 ``` If the filter experiences an error during processing, then it can send the status "error" after the content was (partially or completely) sent. ``` packet: git< status=success packet: git< 0000 packet: git< HALF_WRITTEN_ERRONEOUS_CONTENT packet: git< 0000 packet: git< status=error packet: git< 0000 ``` In case the filter cannot or does not want to process the content as well as any future content for the lifetime of the Git process, then it is expected to respond with an "abort" status at any point in the protocol. ``` packet: git< status=abort packet: git< 0000 ``` Git neither stops nor restarts the filter process in case the "error"/"abort" status is set. However, Git sets its exit code according to the `filter.<driver>.required` flag, mimicking the behavior of the `filter.<driver>.clean` / `filter.<driver>.smudge` mechanism. If the filter dies during the communication or does not adhere to the protocol then Git will stop the filter process and restart it with the next file that needs to be processed. 
Depending on the `filter.<driver>.required` flag Git will interpret that as an error.

#### Delay

If the filter supports the "delay" capability, then Git can send the flag "can-delay" after the filter command and pathname. This flag denotes that the filter can delay filtering the current blob (e.g. to compensate for network latencies) by responding with no content but with the status "delayed" and a flush packet.

```
packet: git> command=smudge
packet: git> pathname=path/testfile.dat
packet: git> can-delay=1
packet: git> 0000
packet: git> CONTENT
packet: git> 0000
packet: git< status=delayed
packet: git< 0000
```

If the filter supports the "delay" capability then it must support the `list_available_blobs` command. If Git sends this command, then the filter is expected to return a list of pathnames representing blobs that have been delayed earlier and are now available. The list must be terminated with a flush packet followed by a "success" status that is also terminated with a flush packet. If no blobs for the delayed paths are available yet, then the filter is expected to block the response until at least one blob becomes available. The filter can tell Git that it has no more delayed blobs by sending an empty list. As soon as the filter responds with an empty list, Git stops asking. All blobs that Git has not received at this point are considered missing and will result in an error.

```
packet: git> command=list_available_blobs
packet: git> 0000
packet: git< pathname=path/testfile.dat
packet: git< pathname=path/otherfile.dat
packet: git< 0000
packet: git< status=success
packet: git< 0000
```

After Git has received the pathnames, it will request the corresponding blobs again. These requests contain a pathname and an empty content section. The filter is expected to respond with the smudged content in the usual way as explained above.

```
packet: git> command=smudge
packet: git> pathname=path/testfile.dat
packet: git> 0000
packet: git> 0000  # empty content!
packet: git< status=success
packet: git< 0000
packet: git< SMUDGED_CONTENT
packet: git< 0000
packet: git< 0000  # empty list, keep "status=success" unchanged!
```

#### Example

A long running filter demo implementation can be found in `contrib/long-running-filter/example.pl` located in the Git core repository. If you develop your own long running filter process then the `GIT_TRACE_PACKET` environment variable can be very helpful for debugging (see [git[1]](git)).

Please note that you cannot use an existing `filter.<driver>.clean` or `filter.<driver>.smudge` command with `filter.<driver>.process` because the former two use a different inter-process communication protocol than the latter one.

#### Interaction between checkin/checkout attributes

In the check-in codepath, the worktree file is first converted with the `filter` driver (if specified and the corresponding driver is defined), then the result is processed with `ident` (if specified), and then finally with `text` (again, if specified and applicable).

In the check-out codepath, the blob content is first converted with `text`, then `ident`, and fed to `filter`.

#### Merging branches with differing checkin/checkout attributes

If you have added attributes to a file that cause the canonical repository format for that file to change, such as adding a clean/smudge filter or text/eol/ident attributes, merging anything where the attribute is not in place would normally cause merge conflicts.

To prevent these unnecessary merge conflicts, Git can be told to run a virtual check-out and check-in of all three stages of a file when resolving a three-way merge by setting the `merge.renormalize` configuration variable. This prevents changes caused by check-in conversion from causing spurious merge conflicts when a converted file is merged with an unconverted file.
As long as a "smudge→clean" results in the same output as a "clean" even on files that are already smudged, this strategy will automatically resolve all filter-related conflicts. Filters that do not act in this way may cause additional merge conflicts that must be resolved manually.

### Generating diff text

#### `diff`

The attribute `diff` affects how Git generates diffs for particular files. It can tell Git whether to generate a textual patch for the path or to treat the path as a binary file. It can also affect what line is shown on the hunk header `@@ -k,l +n,m @@` line, tell Git to use an external command to generate the diff, or ask Git to convert binary files to a text format before generating the diff.

Set

A path to which the `diff` attribute is set is treated as text, even when it contains byte values that normally never appear in text files, such as NUL.

Unset

A path to which the `diff` attribute is unset will generate `Binary files differ` (or a binary patch, if binary patches are enabled).

Unspecified

A path to which the `diff` attribute is unspecified first gets its contents inspected, and if it looks like text and is smaller than `core.bigFileThreshold`, it is treated as text. Otherwise it would generate `Binary files differ`.

String

Diff is shown using the specified diff driver. Each driver may specify one or more options, as described in the following section. The options for the diff driver "foo" are defined by the configuration variables in the "diff.foo" section of the Git config file.

#### Defining an external diff driver

The definition of a diff driver is done in `gitconfig`, not in the `gitattributes` file, so strictly speaking this manual page is a wrong place to talk about it.
However…​

To define an external diff driver `jcdiff`, add a section to your `$GIT_DIR/config` file (or `$HOME/.gitconfig` file) like this:

```
[diff "jcdiff"]
	command = j-c-diff
```

When Git needs to show you a diff for the path with `diff` attribute set to `jcdiff`, it calls the command you specified with the above configuration, i.e. `j-c-diff`, with 7 parameters, just like the `GIT_EXTERNAL_DIFF` program is called. See [git[1]](git) for details.

#### Defining a custom hunk-header

Each group of changes (called a "hunk") in the textual diff output is prefixed with a line of the form:

```
@@ -k,l +n,m @@ TEXT
```

This is called a `hunk header`. The "TEXT" portion is by default a line that begins with an alphabetic character, an underscore or a dollar sign; this matches what GNU `diff -p` output uses. This default selection however is not suited for some contents, and you can use a customized pattern to make a selection.

First, in .gitattributes, you would assign the `diff` attribute for paths.

```
*.tex	diff=tex
```

Then, you would define a "diff.tex.xfuncname" configuration to specify a regular expression that matches a line that you would want to appear as the hunk header "TEXT". Add a section to your `$GIT_DIR/config` file (or `$HOME/.gitconfig` file) like this:

```
[diff "tex"]
	xfuncname = "^(\\\\(sub)*section\\{.*)$"
```

Note: a single level of backslashes is eaten by the configuration file parser, so you would need to double the backslashes; the pattern above picks a line that begins with a backslash, and zero or more occurrences of `sub` followed by `section` followed by an open brace, to the end of line.

There are a few built-in patterns to make this easier, and `tex` is one of them, so you do not have to write the above in your configuration file (you still need to enable this with the attribute mechanism, via `.gitattributes`). The following built-in patterns are available:

* `ada` suitable for source code in the Ada language.
* `bash` suitable for source code in the Bourne-Again SHell language. Covers a superset of POSIX shell function definitions. * `bibtex` suitable for files with BibTeX coded references. * `cpp` suitable for source code in the C and C++ languages. * `csharp` suitable for source code in the C# language. * `css` suitable for cascading style sheets. * `dts` suitable for devicetree (DTS) files. * `elixir` suitable for source code in the Elixir language. * `fortran` suitable for source code in the Fortran language. * `fountain` suitable for Fountain documents. * `golang` suitable for source code in the Go language. * `html` suitable for HTML/XHTML documents. * `java` suitable for source code in the Java language. * `kotlin` suitable for source code in the Kotlin language. * `markdown` suitable for Markdown documents. * `matlab` suitable for source code in the MATLAB and Octave languages. * `objc` suitable for source code in the Objective-C language. * `pascal` suitable for source code in the Pascal/Delphi language. * `perl` suitable for source code in the Perl language. * `php` suitable for source code in the PHP language. * `python` suitable for source code in the Python language. * `ruby` suitable for source code in the Ruby language. * `rust` suitable for source code in the Rust language. * `scheme` suitable for source code in the Scheme language. * `tex` suitable for source code for LaTeX documents. #### Customizing word diff You can customize the rules that `git diff --word-diff` uses to split words in a line, by specifying an appropriate regular expression in the "diff.\*.wordRegex" configuration variable. For example, in TeX a backslash followed by a sequence of letters forms a command, but several such commands can be run together without intervening whitespace. 
To separate them, use a regular expression in your `$GIT_DIR/config` file (or `$HOME/.gitconfig` file) like this: ``` [diff "tex"] wordRegex = "\\\\[a-zA-Z]+|[{}]|\\\\.|[^\\{}[:space:]]+" ``` A built-in pattern is provided for all languages listed in the previous section. #### Performing text diffs of binary files Sometimes it is desirable to see the diff of a text-converted version of some binary files. For example, a word processor document can be converted to an ASCII text representation, and the diff of the text shown. Even though this conversion loses some information, the resulting diff is useful for human viewing (but cannot be applied directly). The `textconv` config option is used to define a program for performing such a conversion. The program should take a single argument, the name of a file to convert, and produce the resulting text on stdout. For example, to show the diff of the exif information of a file instead of the binary information (assuming you have the exif tool installed), add the following section to your `$GIT_DIR/config` file (or `$HOME/.gitconfig` file): ``` [diff "jpg"] textconv = exif ``` | | | | --- | --- | | Note | The text conversion is generally a one-way conversion; in this example, we lose the actual image contents and focus just on the text data. This means that diffs generated by textconv are *not* suitable for applying. For this reason, only `git diff` and the `git log` family of commands (i.e., log, whatchanged, show) will perform text conversion. `git format-patch` will never generate this output. If you want to send somebody a text-converted diff of a binary file (e.g., because it quickly conveys the changes you have made), you should generate it separately and send it as a comment *in addition to* the usual binary diff that you might send. | Because text conversion can be slow, especially when doing a large number of them with `git log -p`, Git provides a mechanism to cache the output and use it in future diffs. 
To enable caching, set the "cachetextconv" variable in your diff driver’s config. For example: ``` [diff "jpg"] textconv = exif cachetextconv = true ``` This will cache the result of running "exif" on each blob indefinitely. If you change the textconv config variable for a diff driver, Git will automatically invalidate the cache entries and re-run the textconv filter. If you want to invalidate the cache manually (e.g., because your version of "exif" was updated and now produces better output), you can remove the cache manually with `git update-ref -d refs/notes/textconv/jpg` (where "jpg" is the name of the diff driver, as in the example above). #### Choosing textconv versus external diff If you want to show differences between binary or specially-formatted blobs in your repository, you can choose to use either an external diff command, or to use textconv to convert them to a diff-able text format. Which method you choose depends on your exact situation. The advantage of using an external diff command is flexibility. You are not bound to find line-oriented changes, nor is it necessary for the output to resemble unified diff. You are free to locate and report changes in the most appropriate way for your data format. A textconv, by comparison, is much more limiting. You provide a transformation of the data into a line-oriented text format, and Git uses its regular diff tools to generate the output. There are several advantages to choosing this method: 1. Ease of use. It is often much simpler to write a binary to text transformation than it is to perform your own diff. In many cases, existing programs can be used as textconv filters (e.g., exif, odt2txt). 2. Git diff features. By performing only the transformation step yourself, you can still utilize many of Git’s diff features, including colorization, word-diff, and combined diffs for merges. 3. Caching. Textconv caching can speed up repeated diffs, such as those you might trigger by running `git log -p`. 
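The textconv approach can be sketched end-to-end with the POSIX `od` tool as a stand-in converter (the `hex` driver name and file names are invented):

```shell
git init -q textconv-demo && cd textconv-demo
git config user.name Demo && git config user.email demo@example.com

git config diff.hex.textconv 'od -An -tx1'  # crude binary-to-text
echo '*.bin diff=hex' > .gitattributes

printf '\000\001\002' > blob.bin
git add . && git commit -q -m 'add blob.bin'

printf '\000\001\003' > blob.bin
git diff    # a readable line diff of the two hex dumps
```

Instead of `Binary files differ`, `git diff` now shows which bytes changed, and word-diff, colorization and caching all apply to the converted text.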
#### Marking files as binary Git usually guesses correctly whether a blob contains text or binary data by examining the beginning of the contents. However, sometimes you may want to override its decision, either because a blob contains binary data later in the file, or because the content, while technically composed of text characters, is opaque to a human reader. For example, many postscript files contain only ASCII characters, but produce noisy and meaningless diffs. The simplest way to mark a file as binary is to unset the diff attribute in the `.gitattributes` file: ``` *.ps -diff ``` This will cause Git to generate `Binary files differ` (or a binary patch, if binary patches are enabled) instead of a regular diff. However, one may also want to specify other diff driver attributes. For example, you might want to use `textconv` to convert postscript files to an ASCII representation for human viewing, but otherwise treat them as binary files. You cannot specify both `-diff` and `diff=ps` attributes. The solution is to use the `diff.*.binary` config option: ``` [diff "ps"] textconv = ps2ascii binary = true ``` ### Performing a three-way merge #### `merge` The attribute `merge` affects how three versions of a file are merged when a file-level merge is necessary during `git merge`, and other commands such as `git revert` and `git cherry-pick`. Set The built-in 3-way merge driver is used to merge the contents in a way similar to the `merge` command of the `RCS` suite. This is suitable for ordinary text files. Unset Take the version from the current branch as the tentative merge result, and declare that the merge has conflicts. This is suitable for binary files that do not have well-defined merge semantics. Unspecified By default, this uses the same built-in 3-way merge driver as is the case when the `merge` attribute is set. However, the `merge.default` configuration variable can name a different merge driver to be used with paths for which the `merge` attribute is unspecified.
String 3-way merge is performed using the specified custom merge driver. The built-in 3-way merge driver can be explicitly specified by asking for the "text" driver; the built-in "take the current branch" driver can be requested with "binary". #### Built-in merge drivers There are a few built-in low-level merge drivers defined that can be asked for via the `merge` attribute. text Usual 3-way file level merge for text files. Conflicted regions are marked with conflict markers `<<<<<<<`, `=======` and `>>>>>>>`. The version from your branch appears before the `=======` marker, and the version from the merged branch appears after the `=======` marker. binary Keep the version from your branch in the work tree, but leave the path in the conflicted state for the user to sort out. union Run 3-way file level merge for text files, but take lines from both versions, instead of leaving conflict markers. This tends to leave the added lines in the resulting file in random order and the user should verify the result. Do not use this if you do not understand the implications. #### Defining a custom merge driver The definition of a merge driver is done in the `.git/config` file, not in the `gitattributes` file, so strictly speaking this manual page is the wrong place to talk about it. However… To define a custom merge driver `filfre`, add a section to your `$GIT_DIR/config` file (or `$HOME/.gitconfig` file) like this: ``` [merge "filfre"] name = feel-free merge driver driver = filfre %O %A %B %L %P recursive = binary ``` The `merge.*.name` variable gives the driver a human-readable name. The `merge.*.driver` variable’s value is used to construct a command to run to merge ancestor’s version (`%O`), current version (`%A`) and the other branches' version (`%B`). These three tokens are replaced with the names of temporary files that hold the contents of these versions when the command line is built. Additionally, `%L` will be replaced with the conflict marker size (see below).
The merge driver is expected to leave the result of the merge in the file named with `%A` by overwriting it, and exit with zero status if it managed to merge them cleanly, or non-zero if there were conflicts. The `merge.*.recursive` variable specifies what other merge driver to use when the merge driver is called for an internal merge between common ancestors, when there is more than one. When left unspecified, the driver itself is used for both the internal merge and the final merge. The merge driver can learn the pathname in which the merged result will be stored via placeholder `%P`. #### `conflict-marker-size` This attribute controls the length of conflict markers left in the work tree file during a conflicted merge. Only setting it to a positive integer has any meaningful effect. For example, this line in `.gitattributes` can be used to tell the merge machinery to leave much longer (instead of the usual 7-character-long) conflict markers when merging the file `Documentation/git-merge.txt` results in a conflict. ``` Documentation/git-merge.txt conflict-marker-size=32 ``` ### Checking whitespace errors #### `whitespace` The `core.whitespace` configuration variable allows you to define what `diff` and `apply` should consider whitespace errors for all paths in the project (See [git-config[1]](git-config)). This attribute gives you finer control per path. Set Notice all types of potential whitespace errors known to Git. The tab width is taken from the value of the `core.whitespace` configuration variable. Unset Do not notice anything as an error. Unspecified Use the value of the `core.whitespace` configuration variable to decide what to notice as an error. String Specify a comma separated list of common whitespace problems to notice in the same format as the `core.whitespace` configuration variable. ### Creating an archive #### `export-ignore` Files and directories with the attribute `export-ignore` won’t be added to archive files.
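As a minimal sketch of `export-ignore`, assuming `git` and `tar` are available (the `tests/` directory and file names here are made up for illustration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
printf 'tests/ export-ignore\n' > .gitattributes
mkdir tests
echo 'payload' > main.c
echo 'fixture' > tests/t0001.sh
git add -A
git commit -qm 'initial'
# tests/ is tracked in the repository, but excluded from the archive:
git archive HEAD | tar -tf -
```

The listing contains `.gitattributes` and `main.c`, but nothing under `tests/`.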
#### `export-subst` If the attribute `export-subst` is set for a file then Git will expand several placeholders when adding this file to an archive. The expansion depends on the availability of a commit ID, i.e., if [git-archive[1]](git-archive) has been given a tree instead of a commit or a tag then no replacement will be done. The placeholders are the same as those for the option `--pretty=format:` of [git-log[1]](git-log), except that they need to be wrapped like this: `$Format:PLACEHOLDERS$` in the file. E.g. the string `$Format:%H$` will be replaced by the commit hash. However, only one `%(describe)` placeholder is expanded per archive to avoid denial-of-service attacks. ### Packing objects #### `delta` Delta compression will not be attempted for blobs for paths with the attribute `delta` set to false. ### Viewing files in GUI tools #### `encoding` The value of this attribute specifies the character encoding that should be used by GUI tools (e.g. [gitk[1]](gitk) and [git-gui[1]](git-gui)) to display the contents of the relevant file. Note that due to performance considerations [gitk[1]](gitk) does not use this attribute unless you manually enable per-file encodings in its options. If this attribute is not set or has an invalid value, the value of the `gui.encoding` configuration variable is used instead (See [git-config[1]](git-config)). Using macro attributes ---------------------- You do not want any end-of-line conversions applied to, nor textual diffs produced for, any binary file you track. You would need to specify e.g. ``` *.jpg -text -diff ``` but that may become cumbersome, when you have many attributes. Using macro attributes, you can define an attribute that, when set, also sets or unsets a number of other attributes at the same time. The system knows a built-in macro attribute, `binary`: ``` *.jpg binary ``` Setting the "binary" attribute also unsets the "text" and "diff" attributes as above. 
Note that macro attributes can only be "Set", though setting one might have the effect of setting or unsetting other attributes or even returning other attributes to the "Unspecified" state. Defining macro attributes ------------------------- Custom macro attributes can be defined only in top-level gitattributes files (`$GIT_DIR/info/attributes`, the `.gitattributes` file at the top level of the working tree, or the global or system-wide gitattributes files), not in `.gitattributes` files in working tree subdirectories. The built-in macro attribute "binary" is equivalent to: ``` [attr]binary -diff -merge -text ``` Notes ----- Git does not follow symbolic links when accessing a `.gitattributes` file in the working tree. This keeps behavior consistent when the file is accessed from the index or a tree versus from the filesystem. Examples -------- If you have these three `gitattributes` files: ``` (in $GIT_DIR/info/attributes) a* foo !bar -baz (in .gitattributes) abc foo bar baz (in t/.gitattributes) ab* merge=filfre abc -foo -bar *.c frotz ``` the attributes given to path `t/abc` are computed as follows: 1. By examining `t/.gitattributes` (which is in the same directory as the path in question), Git finds that the first line matches. The `merge` attribute is set. It also finds that the second line matches, and attributes `foo` and `bar` are unset. 2. Then it examines `.gitattributes` (which is in the parent directory), and finds that the first line matches, but the `t/.gitattributes` file already decided how the `merge`, `foo` and `bar` attributes should be given to this path, so it leaves `foo` and `bar` unset. Attribute `baz` is set. 3. Finally it examines `$GIT_DIR/info/attributes`. This file is used to override the in-tree settings. The first line is a match, and `foo` is set, `bar` is reverted to the unspecified state, and `baz` is unset.
As a result, the attribute assignment to `t/abc` becomes: ``` foo set to true bar unspecified baz set to false merge set to string value "filfre" frotz unspecified ``` See also -------- [git-check-attr[1]](git-check-attr).
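The resolution worked through above can be verified mechanically with `git check-attr`, by recreating the three attribute files from the example in a scratch repository (assumes `git` on `PATH`; no commits are needed, since attribute lookup works on bare paths):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p t .git/info
# The three gitattributes files from the example above:
printf 'a* foo !bar -baz\n' > .git/info/attributes
printf 'abc foo bar baz\n' > .gitattributes
printf 'ab* merge=filfre\nabc -foo -bar\n*.c frotz\n' > t/.gitattributes
git check-attr foo bar baz merge frotz -- t/abc
```

The output matches the table above: `foo` is set, `bar` unspecified, `baz` unset, `merge` is `filfre`, and `frotz` is unspecified.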
git git-clean git-clean ========= Name ---- git-clean - Remove untracked files from the working tree Synopsis -------- ``` git clean [-d] [-f] [-i] [-n] [-q] [-e <pattern>] [-x | -X] [--] [<pathspec>…​] ``` Description ----------- Cleans the working tree by recursively removing files that are not under version control, starting from the current directory. Normally, only files unknown to Git are removed, but if the `-x` option is specified, ignored files are also removed. This can, for example, be useful to remove all build products. If any optional `<pathspec>...` arguments are given, only those paths that match the pathspec are affected. Options ------- -d Normally, when no <pathspec> is specified, git clean will not recurse into untracked directories to avoid removing too much. Specify -d to have it recurse into such directories as well. If a <pathspec> is specified, -d is irrelevant; all untracked files matching the specified paths (with exceptions for nested git directories mentioned under `--force`) will be removed. -f --force If the Git configuration variable clean.requireForce is not set to false, `git clean` will refuse to delete files or directories unless given -f or -i. Git will refuse to modify untracked nested git repositories (directories with a .git subdirectory) unless a second -f is given. -i --interactive Show what would be done and clean files interactively. See “Interactive mode” for details. -n --dry-run Don’t actually remove anything, just show what would be done. -q --quiet Be quiet, only report errors, but not the files that are successfully removed. -e <pattern> --exclude=<pattern> Use the given exclude pattern in addition to the standard ignore rules (see [gitignore[5]](gitignore)). -x Don’t use the standard ignore rules (see [gitignore[5]](gitignore)), but still use the ignore rules given with `-e` options from the command line. This allows removing all untracked files, including build products. 
This can be used (possibly in conjunction with `git restore` or `git reset`) to create a pristine working directory to test a clean build. -X Remove only files ignored by Git. This may be useful to rebuild everything from scratch, but keep manually created files. Interactive mode ---------------- When the command enters the interactive mode, it shows the files and directories to be cleaned, and goes into its interactive command loop. The command loop shows the list of subcommands available, and gives a prompt "What now> ". In general, when the prompt ends with a single `>`, you can pick only one of the choices given and type return, like this: ``` *** Commands *** 1: clean 2: filter by pattern 3: select by numbers 4: ask each 5: quit 6: help What now> 1 ``` You could also say `c` or `clean` above as long as the choice is unique. The main command loop has 6 subcommands. clean Start cleaning files and directories, and then quit. filter by pattern This shows the files and directories to be deleted and issues an "Input ignore patterns>>" prompt. You can input space-separated patterns to exclude files and directories from deletion. E.g. "\*.c \*.h" will exclude files ending with ".c" and ".h" from deletion. When you are satisfied with the filtered result, press ENTER (empty) to go back to the main menu. select by numbers This shows the files and directories to be deleted and issues a "Select items to delete>>" prompt. When the prompt ends with double `>>` like this, you can make more than one selection, separated with whitespace or commas. You can also specify ranges. E.g. "2-5 7,9" to choose 2,3,4,5,7,9 from the list. If the second number in a range is omitted, all remaining items are selected. E.g. "7-" to choose 7,8,9 from the list. You can say `*` to choose everything. Again, when you are satisfied with the filtered result, press ENTER (empty) to go back to the main menu. ask each This will start to clean, and you must confirm one by one in order to delete items.
Please note that this action is not as efficient as the above two actions. quit This lets you quit without doing any cleaning. help Show brief usage of interactive git-clean. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: clean.requireForce A boolean to make git-clean do nothing unless given -f, -i or -n. Defaults to true. See also -------- [gitignore[5]](gitignore) git git-reset git-reset ========= Name ---- git-reset - Reset current HEAD to the specified state Synopsis -------- ``` git reset [-q] [<tree-ish>] [--] <pathspec>…​ git reset [-q] [--pathspec-from-file=<file> [--pathspec-file-nul]] [<tree-ish>] git reset (--patch | -p) [<tree-ish>] [--] [<pathspec>…​] git reset [--soft | --mixed [-N] | --hard | --merge | --keep] [-q] [<commit>] ``` Description ----------- In the first three forms, copy entries from `<tree-ish>` to the index. In the last form, set the current branch head (`HEAD`) to `<commit>`, optionally modifying index and working tree to match. The `<tree-ish>`/`<commit>` defaults to `HEAD` in all forms. *git reset* [-q] [<tree-ish>] [--] <pathspec>…​ *git reset* [-q] [--pathspec-from-file=<file> [--pathspec-file-nul]] [<tree-ish>] These forms reset the index entries for all paths that match the `<pathspec>` to their state at `<tree-ish>`. (It does not affect the working tree or the current branch.) This means that `git reset <pathspec>` is the opposite of `git add <pathspec>`. This command is equivalent to `git restore [--source=<tree-ish>] --staged <pathspec>...`. After running `git reset <pathspec>` to update the index entry, you can use [git-restore[1]](git-restore) to check the contents out of the index to the working tree.
Alternatively, using [git-restore[1]](git-restore) and specifying a commit with `--source`, you can copy the contents of a path out of a commit to the index and to the working tree in one go. *git reset* (--patch | -p) [<tree-ish>] [--] [<pathspec>…​] Interactively select hunks in the difference between the index and `<tree-ish>` (defaults to `HEAD`). The chosen hunks are applied in reverse to the index. This means that `git reset -p` is the opposite of `git add -p`, i.e. you can use it to selectively reset hunks. See the “Interactive Mode” section of [git-add[1]](git-add) to learn how to operate the `--patch` mode. *git reset* [<mode>] [<commit>] This form resets the current branch head to `<commit>` and possibly updates the index (resetting it to the tree of `<commit>`) and the working tree depending on `<mode>`. If `<mode>` is omitted, defaults to `--mixed`. The `<mode>` must be one of the following: --soft Does not touch the index file or the working tree at all (but resets the head to `<commit>`, just like all modes do). This leaves all your changed files "Changes to be committed", as `git status` would put it. --mixed Resets the index but not the working tree (i.e., the changed files are preserved but not marked for commit) and reports what has not been updated. This is the default action. If `-N` is specified, removed paths are marked as intent-to-add (see [git-add[1]](git-add)). --hard Resets the index and working tree. Any changes to tracked files in the working tree since `<commit>` are discarded. Any untracked files or directories in the way of writing any tracked files are simply deleted. --merge Resets the index and updates the files in the working tree that are different between `<commit>` and `HEAD`, but keeps those which are different between the index and working tree (i.e. which have changes which have not been added). If a file that is different between `<commit>` and the index has unstaged changes, reset is aborted. 
In other words, `--merge` does something like a `git read-tree -u -m <commit>`, but carries forward unmerged index entries. --keep Resets index entries and updates files in the working tree that are different between `<commit>` and `HEAD`. If a file that is different between `<commit>` and `HEAD` has local changes, reset is aborted. --[no-]recurse-submodules When the working tree is updated, using --recurse-submodules will also recursively reset the working tree of all active submodules according to the commit recorded in the superproject, also setting the submodules' HEAD to be detached at that commit. See "Reset, restore and revert" in [git[1]](git) for the differences between the three commands. Options ------- -q --quiet Be quiet, only report errors. --refresh --no-refresh Refresh the index after a mixed reset. Enabled by default. --pathspec-from-file=<file> Pathspec is passed in `<file>` instead of commandline args. If `<file>` is exactly `-` then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). See also `--pathspec-file-nul` and global `--literal-pathspecs`. --pathspec-file-nul Only meaningful with `--pathspec-from-file`. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). -- Do not interpret any more arguments as options. <pathspec>…​ Limits the paths affected by the operation. For more details, see the `pathspec` entry in [gitglossary[7]](gitglossary). Examples -------- Undo add ``` $ edit (1) $ git add frotz.c filfre.c $ mailx (2) $ git reset (3) $ git pull git://info.example.com/ nitfol (4) ``` 1. You are happily working on something, and find the changes in these files are in good order. You do not want to see them when you run `git diff`, because you plan to work on other files and changes with these files are distracting. 2. 
Somebody asks you to pull, and the changes sound worthy of merging. 3. However, you already dirtied the index (i.e. your index does not match the `HEAD` commit). But you know the pull you are going to make does not affect `frotz.c` or `filfre.c`, so you revert the index changes for these two files. Your changes in working tree remain there. 4. Then you can pull and merge, leaving `frotz.c` and `filfre.c` changes still in the working tree. Undo a commit and redo ``` $ git commit ... $ git reset --soft HEAD^ (1) $ edit (2) $ git commit -a -c ORIG_HEAD (3) ``` 1. This is most often done when you remembered what you just committed is incomplete, or you misspelled your commit message, or both. Leaves working tree as it was before "reset". 2. Make corrections to working tree files. 3. "reset" copies the old head to `.git/ORIG_HEAD`; redo the commit by starting with its log message. If you do not need to edit the message further, you can give `-C` option instead. See also the `--amend` option to [git-commit[1]](git-commit). Undo a commit, making it a topic branch ``` $ git branch topic/wip (1) $ git reset --hard HEAD~3 (2) $ git switch topic/wip (3) ``` 1. You have made some commits, but realize they were premature to be in the `master` branch. You want to continue polishing them in a topic branch, so create `topic/wip` branch off of the current `HEAD`. 2. Rewind the master branch to get rid of those three commits. 3. Switch to `topic/wip` branch and keep working. Undo commits permanently ``` $ git commit ... $ git reset --hard HEAD~3 (1) ``` 1. The last three commits (`HEAD`, `HEAD^`, and `HEAD~2`) were bad and you do not want to ever see them again. Do **not** do this if you have already given these commits to somebody else. (See the "RECOVERING FROM UPSTREAM REBASE" section in [git-rebase[1]](git-rebase) for the implications of doing so.) 
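The soft-reset "undo and redo" pattern above can be exercised in a scratch repository. This is a sketch, assuming `git` on `PATH`; the file name and commit messages are made up for illustration:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
echo one > file.txt
git add file.txt
git commit -qm 'first'
echo two >> file.txt
git commit -aqm 'hasty commit'
git reset --soft HEAD^      # drop the commit; the change stays staged
git status --short          # the change is still "to be committed"
git commit -qm 'better message'   # redo with a corrected message
```

After the soft reset the working tree and index are untouched, so the redo commit records exactly the same content under the new message (the old head remains reachable via `ORIG_HEAD`).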
Undo a merge or pull ``` $ git pull (1) Auto-merging nitfol CONFLICT (content): Merge conflict in nitfol Automatic merge failed; fix conflicts and then commit the result. $ git reset --hard (2) $ git pull . topic/branch (3) Updating from 41223... to 13134... Fast-forward $ git reset --hard ORIG_HEAD (4) ``` 1. Trying to update from the upstream resulted in a lot of conflicts; you were not ready to spend a lot of time merging right now, so you decide to do that later. 2. "pull" has not made a merge commit, so `git reset --hard`, which is a synonym for `git reset --hard HEAD`, clears the mess from the index file and the working tree. 3. Merge a topic branch into the current branch, which resulted in a fast-forward. 4. But you decided that the topic branch is not ready for public consumption yet. "pull" or "merge" always leaves the original tip of the current branch in `ORIG_HEAD`, so resetting hard to it brings your index file and the working tree back to that state, and resets the tip of the branch to that commit. Undo a merge or pull inside a dirty working tree ``` $ git pull (1) Auto-merging nitfol Merge made by recursive. nitfol | 20 +++++---- ... $ git reset --merge ORIG_HEAD (2) ``` 1. Even if you have local modifications in your working tree, you can safely say `git pull` when you know that the change in the other branch does not overlap with them. 2. After inspecting the result of the merge, you may find that the change in the other branch is unsatisfactory. Running `git reset --hard ORIG_HEAD` will let you go back to where you were, but it will discard your local changes, which you do not want. `git reset --merge` keeps your local changes. Interrupted workflow Suppose you are interrupted by an urgent fix request while you are in the middle of a large change. The files in your working tree are not in any shape to be committed yet, but you need to get to the other branch for a quick bugfix.
``` $ git switch feature ;# you were working in "feature" branch and $ work work work ;# got interrupted $ git commit -a -m "snapshot WIP" (1) $ git switch master $ fix fix fix $ git commit ;# commit with real log $ git switch feature $ git reset --soft HEAD^ ;# go back to WIP state (2) $ git reset (3) ``` 1. This commit will get blown away so a throw-away log message is OK. 2. This removes the `WIP` commit from the commit history, and sets your working tree to the state just before you made that snapshot. 3. At this point the index file still has all the WIP changes you committed as `snapshot WIP`. This updates the index to show your WIP files as uncommitted. See also [git-stash[1]](git-stash). Reset a single file in the index Suppose you have added a file to your index, but later decide you do not want to add it to your commit. You can remove the file from the index while keeping your changes with git reset. ``` $ git reset -- frotz.c (1) $ git commit -m "Commit files in index" (2) $ git add frotz.c (3) ``` 1. This removes the file from the index while keeping it in the working directory. 2. This commits all other changes in the index. 3. Adds the file to the index again. Keep changes in working tree while discarding some previous commits Suppose you are working on something and you commit it, and then you continue working a bit more, but now you think that what you have in your working tree should be in another branch that has nothing to do with what you committed previously. You can start a new branch and reset it while keeping the changes in your working tree. ``` $ git tag start $ git switch -c branch1 $ edit $ git commit ... (1) $ edit $ git switch -c branch2 (2) $ git reset --keep start (3) ``` 1. This commits your first edits in `branch1`. 2. In the ideal world, you could have realized that the earlier commit did not belong to the new topic when you created and switched to `branch2` (i.e. `git switch -c branch2 start`), but nobody is perfect. 3. 
But you can use `reset --keep` to remove the unwanted commit after you switched to `branch2`. Split a commit apart into a sequence of commits Suppose that you have created lots of logically separate changes and committed them together. Then, later you decide that it might be better to have each logical chunk associated with its own commit. You can use git reset to rewind history without changing the contents of your local files, and then successively use `git add -p` to interactively select which hunks to include into each commit, using `git commit -c` to pre-populate the commit message. ``` $ git reset -N HEAD^ (1) $ git add -p (2) $ git diff --cached (3) $ git commit -c HEAD@{1} (4) ... (5) $ git add ... (6) $ git diff --cached (7) $ git commit ... (8) ``` 1. First, reset the history back one commit so that we remove the original commit, but leave the working tree with all the changes. The -N ensures that any new files added with `HEAD` are still marked so that `git add -p` will find them. 2. Next, we interactively select diff hunks to add using the `git add -p` facility. This will ask you about each diff hunk in sequence and you can use simple commands such as "yes, include this", "No don’t include this" or even the very powerful "edit" facility. 3. Once satisfied with the hunks you want to include, you should verify what has been prepared for the first commit by using `git diff --cached`. This shows all the changes that have been moved into the index and are about to be committed. 4. Next, commit the changes stored in the index. The `-c` option specifies to pre-populate the commit message from the original message that you started with in the first commit. This is helpful to avoid retyping it. The `HEAD@{1}` is a special notation for the commit that `HEAD` used to be at prior to the original reset commit (1 change ago). See [git-reflog[1]](git-reflog) for more details. You may also use any other valid commit reference. 5. 
You can repeat steps 2-4 multiple times to break the original code into any number of commits. 6. Now you’ve split out many of the changes into their own commits, and might no longer use the patch mode of `git add`, in order to select all remaining uncommitted changes. 7. Once again, check to verify that you’ve included what you want to. You may also wish to verify that git diff doesn’t show any remaining changes to be committed later. 8. And finally create the final commit. Discussion ---------- The tables below show what happens when running: ``` git reset --option target ``` to reset the `HEAD` to another commit (`target`) with the different reset options depending on the state of the files. In these tables, `A`, `B`, `C` and `D` are some different states of a file. For example, the first line of the first table means that if a file is in state `A` in the working tree, in state `B` in the index, in state `C` in `HEAD` and in state `D` in the target, then `git reset --soft target` will leave the file in the working tree in state `A` and in the index in state `B`. It resets (i.e. moves) the `HEAD` (i.e. the tip of the current branch, if you are on one) to `target` (which has the file in state `D`). 
``` working index HEAD target working index HEAD ---------------------------------------------------- A B C D --soft A B D --mixed A D D --hard D D D --merge (disallowed) --keep (disallowed) ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- A B C C --soft A B C --mixed A C C --hard C C C --merge (disallowed) --keep A C C ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- B B C D --soft B B D --mixed B D D --hard D D D --merge D D D --keep (disallowed) ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- B B C C --soft B B C --mixed B C C --hard C C C --merge C C C --keep B C C ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- B C C D --soft B C D --mixed B D D --hard D D D --merge (disallowed) --keep (disallowed) ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- B C C C --soft B C C --mixed B C C --hard C C C --merge B C C --keep B C C ``` `reset --merge` is meant to be used when resetting out of a conflicted merge. Any mergy operation guarantees that the working tree file that is involved in the merge does not have a local change with respect to the index before it starts, and that it writes the result out to the working tree. So if we see some difference between the index and the target and also between the index and the working tree, then it means that we are not resetting out from a state that a mergy operation left after failing with a conflict. That is why we disallow `--merge` option in this case. `reset --keep` is meant to be used when removing some of the last commits in the current branch while keeping changes in the working tree. 
If there could be conflicts between the changes in the commit we want to remove and the changes in the working tree we want to keep, the reset is disallowed. That’s why it is disallowed if there are both changes between the working tree and `HEAD`, and between `HEAD` and the target. To be safe, it is also disallowed when there are unmerged entries. The following tables show what happens when there are unmerged entries: ``` working index HEAD target working index HEAD ---------------------------------------------------- X U A B --soft (disallowed) --mixed X B B --hard B B B --merge B B B --keep (disallowed) ``` ``` working index HEAD target working index HEAD ---------------------------------------------------- X U A A --soft (disallowed) --mixed X A A --hard A A A --merge A A A --keep (disallowed) ``` `X` means any state and `U` means an unmerged index.
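One of the "`--keep` (disallowed)" rows of the tables above can be reproduced directly: when a file differs between `HEAD` and the target *and* also carries a local change, `--keep` aborts rather than risk losing the edit. A throwaway sketch (assumes `git` on `PATH`; file name and messages are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
echo one > file.txt
git add file.txt
git commit -qm 'first'       # this will be the reset target
echo two > file.txt
git commit -aqm 'second'     # file.txt differs between HEAD and target
echo three > file.txt        # ...and also carries an uncommitted change
# file.txt changed both in HEAD..target and in the working tree,
# so --keep refuses to run and leaves everything untouched:
if ! git reset --keep HEAD^ 2>/dev/null; then
  echo 'reset --keep refused; local change preserved'
fi
```

Had `file.txt` been unchanged between the two commits (the `B C C C` row), `--keep` would have succeeded while carrying the working tree change forward.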
git git-shortlog git-shortlog ============ Name ---- git-shortlog - Summarize `git log` output Synopsis -------- ``` git shortlog [<options>] [<revision-range>] [[--] <path>…​] git log --pretty=short | git shortlog [<options>] ``` Description ----------- Summarizes `git log` output in a format suitable for inclusion in release announcements. Each commit will be grouped by author and title. Additionally, "[PATCH]" will be stripped from the commit description. If no revisions are passed on the command line and either standard input is not a terminal or there is no current branch, `git shortlog` will output a summary of the log read from standard input, without reference to the current repository. Options ------- -n --numbered Sort output according to the number of commits per author instead of author alphabetic order. -s --summary Suppress commit description and provide a commit count summary only. -e --email Show the email address of each author. --format[=<format>] Instead of the commit subject, use some other information to describe each commit. `<format>` can be any string accepted by the `--format` option of `git log`, such as `* [%h] %s`. (See the "PRETTY FORMATS" section of [git-log[1]](git-log).) ``` Each pretty-printed commit will be rewrapped before it is shown. ``` --date=<format> Show dates formatted according to the given date string. (See the `--date` option in the "Commit Formatting" section of [git-log[1]](git-log)). Useful with `--group=format:<format>`. --group=<type> Group commits based on `<type>`. If no `--group` option is specified, the default is `author`. `<type>` is one of: * `author`, commits are grouped by author * `committer`, commits are grouped by committer (the same as `-c`) * `trailer:<field>`, the `<field>` is interpreted as a case-insensitive commit message trailer (see [git-interpret-trailers[1]](git-interpret-trailers)). 
For example, if your project uses `Reviewed-by` trailers, you might want to see who has been reviewing with `git shortlog -ns --group=trailer:reviewed-by`. * `format:<format>`, any string accepted by the `--format` option of `git log`. (See the "PRETTY FORMATS" section of [git-log[1]](git-log).) Note that commits that do not include the trailer will not be counted. Likewise, commits with multiple trailers (e.g., multiple signoffs) may be counted more than once (but only once per unique trailer value in that commit). Shortlog will attempt to parse each trailer value as a `name <email>` identity. If successful, the mailmap is applied and the email is omitted unless the `--email` option is specified. If the value cannot be parsed as an identity, it will be taken literally and completely. If `--group` is specified multiple times, commits are counted under each value (but again, only once per unique value in that commit). For example, `git shortlog --group=author --group=trailer:co-authored-by` counts both authors and co-authors. -c --committer This is an alias for `--group=committer`. -w[<width>[,<indent1>[,<indent2>]]] Linewrap the output by wrapping each line at `width`. The first line of each entry is indented by `indent1` spaces, and the second and subsequent lines are indented by `indent2` spaces. `width`, `indent1`, and `indent2` default to 76, 6 and 9 respectively. If width is `0` (zero) then indent the lines of the output without wrapping them. <revision-range> Show only commits in the specified revision range. When no <revision-range> is specified, it defaults to `HEAD` (i.e. the whole history leading to the current commit). `origin..HEAD` specifies all the commits reachable from the current commit (i.e. `HEAD`), but not from `origin`. For a complete list of ways to spell <revision-range>, see the "Specifying Ranges" section of [gitrevisions[7]](gitrevisions). 
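A quick way to see these grouping options in action is a sketch in a throwaway repository (assumes `git` ≥ 2.28; the authors and trailer values are made up). Note that the revision (`HEAD`) is passed explicitly: in a script, standard input is not a terminal, so `git shortlog` would otherwise try to read log contents from stdin.

```shell
# Sketch: group commits by author, then by a Reviewed-by trailer.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=Alice GIT_AUTHOR_EMAIL=alice@example.com
export GIT_COMMITTER_NAME=Alice GIT_COMMITTER_EMAIL=alice@example.com
git init -q -b main
git commit -q --allow-empty -m 'first change'
git commit -q --allow-empty -m 'second change' -m 'Reviewed-by: Bob <bob@example.com>'
export GIT_AUTHOR_NAME=Bob GIT_AUTHOR_EMAIL=bob@example.com
git commit -q --allow-empty -m 'third change'
by_author=$(git shortlog -ns HEAD)                                # 2 Alice / 1 Bob
by_reviewer=$(git shortlog -ns --group=trailer:reviewed-by HEAD)  # 1 Bob
printf '%s\n---\n%s\n' "$by_author" "$by_reviewer"
```

Only the one commit carrying the trailer is counted in the second listing, and the trailer value is parsed as an identity, so the email is omitted without `-e`.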
[--] <path>…​ Consider only commits that are enough to explain how the files that match the specified paths came to be. Paths may need to be prefixed with `--` to separate them from options or the revision range, when confusion arises. ### Commit Limiting Besides specifying a range of commits that should be listed using the special notations explained in the description, additional commit limiting may be applied. Using more options generally further limits the output (e.g. `--since=<date1>` limits to commits newer than `<date1>`, and using it with `--grep=<pattern>` further limits to commits whose log message has a line that matches `<pattern>`), unless otherwise noted. Note that these are applied before commit ordering and formatting options, such as `--reverse`. -<number> -n <number> --max-count=<number> Limit the number of commits to output. --skip=<number> Skip `number` commits before starting to show the commit output. --since=<date> --after=<date> Show commits more recent than a specific date. --since-as-filter=<date> Show all commits more recent than a specific date. This visits all commits in the range, rather than stopping at the first commit which is older than a specific date. --until=<date> --before=<date> Show commits older than a specific date. --author=<pattern> --committer=<pattern> Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression). With more than one `--author=<pattern>`, commits whose author matches any of the given patterns are chosen (similarly for multiple `--committer=<pattern>`). --grep-reflog=<pattern> Limit the commits output to ones with reflog entries that match the specified pattern (regular expression). With more than one `--grep-reflog`, commits whose reflog message matches any of the given patterns are chosen. It is an error to use this option unless `--walk-reflogs` is in use. 
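The date and author filters above compose naturally. A sketch (dates, names, and messages are made up; `git log` is used so the matching commits are visible, and the dates are pinned through the `GIT_*_DATE` environment variables to keep the result deterministic):

```shell
# Sketch: limit commits by date (--since) and by author (--author).
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=Alice GIT_AUTHOR_EMAIL=alice@example.com
export GIT_COMMITTER_NAME=Alice GIT_COMMITTER_EMAIL=alice@example.com
git init -q -b main
GIT_AUTHOR_DATE=2020-01-01T12:00:00 GIT_COMMITTER_DATE=2020-01-01T12:00:00 \
    git commit -q --allow-empty -m 'old: initial import'
export GIT_AUTHOR_NAME=Bob GIT_AUTHOR_EMAIL=bob@example.com
GIT_AUTHOR_DATE=2024-01-01T12:00:00 GIT_COMMITTER_DATE=2024-01-01T12:00:00 \
    git commit -q --allow-empty -m 'new: fix parser'
recent=$(git log --since=2022-01-01 --format=%s)   # only commits newer than the date
by_alice=$(git log --author=Alice --format=%s)     # author header matches the pattern
printf '%s\n%s\n' "$recent" "$by_alice"
```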
--grep=<pattern> Limit the commits output to ones with log message that matches the specified pattern (regular expression). With more than one `--grep=<pattern>`, commits whose message matches any of the given patterns are chosen (but see `--all-match`). When `--notes` is in effect, the message from the notes is matched as if it were part of the log message. --all-match Limit the commits output to ones that match all given `--grep`, instead of ones that match at least one. --invert-grep Limit the commits output to ones with log message that do not match the pattern specified with `--grep=<pattern>`. -i --regexp-ignore-case Match the regular expression limiting patterns without regard to letter case. --basic-regexp Consider the limiting patterns to be basic regular expressions; this is the default. -E --extended-regexp Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions. -F --fixed-strings Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression). -P --perl-regexp Consider the limiting patterns to be Perl-compatible regular expressions. Support for these types of regular expressions is an optional compile-time dependency. If Git wasn’t compiled with support for them providing this option will cause it to die. --remove-empty Stop when a given path disappears from the tree. --merges Print only merge commits. This is exactly the same as `--min-parents=2`. --no-merges Do not print commits with more than one parent. This is exactly the same as `--max-parents=1`. --min-parents=<number> --max-parents=<number> --no-min-parents --no-max-parents Show only commits which have at least (or at most) that many parent commits. In particular, `--max-parents=1` is the same as `--no-merges`, `--min-parents=2` is the same as `--merges`. `--max-parents=0` gives all root commits and `--min-parents=3` all octopus merges. 
`--no-min-parents` and `--no-max-parents` reset these limits (to no limit) again. Equivalent forms are `--min-parents=0` (any commit has 0 or more parents) and `--max-parents=-1` (negative numbers denote no upper limit). --first-parent When finding commits to include, follow only the first parent commit upon seeing a merge commit. This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge. --exclude-first-parent-only When finding commits to exclude (with a `^`), follow only the first parent commit upon seeing a merge commit. This can be used to find the set of changes in a topic branch from the point where it diverged from the remote branch, given that arbitrary merges can be valid topic branch changes. --not Reverses the meaning of the `^` prefix (or lack thereof) for all following revision specifiers, up to the next `--not`. --all Pretend as if all the refs in `refs/`, along with `HEAD`, are listed on the command line as `<commit>`. --branches[=<pattern>] Pretend as if all the refs in `refs/heads` are listed on the command line as `<commit>`. If `<pattern>` is given, limit branches to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --tags[=<pattern>] Pretend as if all the refs in `refs/tags` are listed on the command line as `<commit>`. If `<pattern>` is given, limit tags to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --remotes[=<pattern>] Pretend as if all the refs in `refs/remotes` are listed on the command line as `<commit>`. If `<pattern>` is given, limit remote-tracking branches to ones matching given shell glob. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. 
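The effect of `--first-parent` described above can be reproduced with a small sketch (branch names and messages are made up; assumes `git` ≥ 2.28): the individual commits brought in by the merge disappear from the listing, leaving only the mainline.

```shell
# Sketch: --first-parent hides the individual commits of a merged topic.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
git init -q -b main
git commit -q --allow-empty -m 'base'
git checkout -qb topic
git commit -q --allow-empty -m 'topic work'
git checkout -q main
git commit -q --allow-empty -m 'mainline work'
git merge -q --no-ff -m 'merge topic' topic
everything=$(git log --format=%s)
mainline=$(git log --first-parent --format=%s)   # topic's own commits are skipped
printf '%s\n' "$mainline"
```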
--glob=<glob-pattern> Pretend as if all the refs matching shell glob `<glob-pattern>` are listed on the command line as `<commit>`. Leading `refs/`, is automatically prepended if missing. If pattern lacks `?`, `*`, or `[`, `/*` at the end is implied. --exclude=<glob-pattern> Do not include refs matching `<glob-pattern>` that the next `--all`, `--branches`, `--tags`, `--remotes`, or `--glob` would otherwise consider. Repetitions of this option accumulate exclusion patterns up to the next `--all`, `--branches`, `--tags`, `--remotes`, or `--glob` option (other options or arguments do not clear accumulated patterns). The patterns given should not begin with `refs/heads`, `refs/tags`, or `refs/remotes` when applied to `--branches`, `--tags`, or `--remotes`, respectively, and they must begin with `refs/` when applied to `--glob` or `--all`. If a trailing `/*` is intended, it must be given explicitly. --exclude-hidden=[receive|uploadpack] Do not include refs that would be hidden by `git-receive-pack` or `git-upload-pack` by consulting the appropriate `receive.hideRefs` or `uploadpack.hideRefs` configuration along with `transfer.hideRefs` (see [git-config[1]](git-config)). This option affects the next pseudo-ref option `--all` or `--glob` and is cleared after processing them. --reflog Pretend as if all objects mentioned by reflogs are listed on the command line as `<commit>`. --alternate-refs Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line. An alternate repository is any repository whose object directory is specified in `objects/info/alternates`. The set of included objects may be modified by `core.alternateRefsCommand`, etc. See [git-config[1]](git-config). --single-worktree By default, all working trees will be examined by the following options when there are more than one (see [git-worktree[1]](git-worktree)): `--all`, `--reflog` and `--indexed-objects`. 
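The interplay of a ref glob with a preceding `--exclude` can be sketched as follows (branch names are made up; assumes `git` ≥ 2.28). The exclusion pattern is given relative to `refs/heads` because it applies to `--branches`:

```shell
# Sketch: select branches by glob, then carve one out with --exclude.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
git init -q -b main
git commit -q --allow-empty -m 'base'
for b in feature/x feature/y hotfix; do
    git checkout -qb "$b" main
    git commit -q --allow-empty -m "on $b"
done
features=$(git log --branches='feature/*' --format=%s)               # no hotfix
pruned=$(git log --exclude='feature/y' --branches='feature/*' --format=%s)
printf '%s\n---\n%s\n' "$features" "$pruned"
```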
This option forces them to examine the current working tree only. --ignore-missing Upon seeing an invalid object name in the input, pretend as if the bad input was not given. --bisect Pretend as if the bad bisection ref `refs/bisect/bad` was listed and as if it was followed by `--not` and the good bisection refs `refs/bisect/good-*` on the command line. --stdin In addition to the `<commit>` listed on the command line, read them from the standard input. If a `--` separator is seen, stop reading commits and start reading paths to limit the result. --cherry-mark Like `--cherry-pick` (see below) but mark equivalent commits with `=` rather than omitting them, and inequivalent ones with `+`. --cherry-pick Omit any commit that introduces the same change as another commit on the “other side” when the set of commits are limited with symmetric difference. For example, if you have two branches, `A` and `B`, a usual way to list all commits on only one side of them is with `--left-right` (see the example below in the description of the `--left-right` option). However, it shows the commits that were cherry-picked from the other branch (for example, “3rd on b” may be cherry-picked from branch A). With this option, such pairs of commits are excluded from the output. --left-only --right-only List only commits on the respective side of a symmetric difference, i.e. only those which would be marked `<` resp. `>` by `--left-right`. For example, `--cherry-pick --right-only A...B` omits those commits from `B` which are in `A` or are patch-equivalent to a commit in `A`. In other words, this lists the `+` commits from `git cherry A B`. More precisely, `--cherry-pick --right-only --no-merges` gives the exact list. 
--cherry A synonym for `--right-only --cherry-mark --no-merges`; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with `git log --cherry upstream...mybranch`, similar to `git cherry upstream mybranch`. -g --walk-reflogs Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones. When this option is used you cannot specify commits to exclude (that is, `^commit`, `commit1..commit2`, and `commit1...commit2` notations cannot be used). With `--pretty` format other than `oneline` and `reference` (for obvious reasons), this causes the output to have two extra lines of information taken from the reflog. The reflog designator in the output may be shown as `ref@{Nth}` (where `Nth` is the reverse-chronological index in the reflog) or as `ref@{timestamp}` (with the timestamp for that entry), depending on a few rules: 1. If the starting point is specified as `ref@{Nth}`, show the index format. 2. If the starting point was specified as `ref@{now}`, show the timestamp format. 3. If neither was used, but `--date` was given on the command line, show the timestamp in the format requested by `--date`. 4. Otherwise, show the index format. Under `--pretty=oneline`, the commit message is prefixed with this information on the same line. This option cannot be combined with `--reverse`. See also [git-reflog[1]](git-reflog). Under `--pretty=reference`, this information will not be shown at all. --merge After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge. --boundary Output excluded boundary commits. Boundary commits are prefixed with `-`. ### History Simplification Sometimes you are only interested in parts of the history, for example the commits modifying a particular <path>. 
But there are two parts of `History Simplification`, one part is selecting the commits and the other is how to do it, as there are various strategies to simplify the history. The following options select the commits to be shown: <paths> Commits modifying the given <paths> are selected. --simplify-by-decoration Commits that are referred by some branch or tag are selected. Note that extra commits can be shown to give a meaningful history. The following options affect the way the simplification is performed: Default mode Simplifies the history to the simplest history explaining the final state of the tree. Simplest because it prunes some side branches if the end result is the same (i.e. merging branches with the same content) --show-pulls Include all commits from the default mode, but also any merge commits that are not TREESAME to the first parent but are TREESAME to a later parent. This mode is helpful for showing the merge commits that "first introduced" a change to a branch. --full-history Same as the default mode, but does not prune some history. --dense Only the selected commits are shown, plus some to have a meaningful history. --sparse All commits in the simplified history are shown. --simplify-merges Additional option to `--full-history` to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge. --ancestry-path[=<commit>] When given a range of commits to display (e.g. `commit1..commit2` or `commit2 ^commit1`), only display commits in that range that are ancestors of <commit>, descendants of <commit>, or <commit> itself. If no commit is specified, use `commit1` (the excluded part of the range) as <commit>. Can be passed multiple times; if so, a commit is included if it is any of the commits given or if it is an ancestor or descendant of one of them. A more detailed explanation follows. Suppose you specified `foo` as the <paths>. 
We shall call commits that modify `foo` !TREESAME, and the rest TREESAME. (In a diff filtered for `foo`, they look different and equal, respectively.)

In the following, we will always refer to the same example history to illustrate the differences between simplification settings. We assume that you are filtering for a file `foo` in this commit graph:

```
	  .-A---M---N---O---P---Q
	 /     /   /   /   /   /
	I     B   C   D   E   Y
	 \   /   /   /   /   /
	  `-------------'   X
```

The horizontal line of history A---Q is taken to be the first parent of each merge. The commits are:

* `I` is the initial commit, in which `foo` exists with contents “asdf”, and a file `quux` exists with contents “quux”. Initial commits are compared to an empty tree, so `I` is !TREESAME.
* In `A`, `foo` contains just “foo”.
* `B` contains the same change as `A`. Its merge `M` is trivial and hence TREESAME to all parents.
* `C` does not change `foo`, but its merge `N` changes it to “foobar”, so it is not TREESAME to any parent.
* `D` sets `foo` to “baz”. Its merge `O` combines the strings from `N` and `D` to “foobarbaz”; i.e., it is not TREESAME to any parent.
* `E` changes `quux` to “xyzzy”, and its merge `P` combines the strings to “quux xyzzy”. `P` is TREESAME to `O`, but not to `E`.
* `X` is an independent root commit that added a new file `side`, and `Y` modified it. `Y` is TREESAME to `X`. Its merge `Q` added `side` to `P`, and `Q` is TREESAME to `P`, but not to `Y`.

`rev-list` walks backwards through history, including or excluding commits based on whether `--full-history` and/or parent rewriting (via `--parents` or `--children`) are used. The following settings are available.

Default mode Commits are included if they are not TREESAME to any parent (though this can be changed, see `--sparse` below). If the commit was a merge, and it was TREESAME to one parent, follow only that parent. (Even if there are several TREESAME parents, follow only one of them.) Otherwise, follow all parents.
This results in: ``` .-A---N---O / / / I---------D ``` Note how the rule to only follow the TREESAME parent, if one is available, removed `B` from consideration entirely. `C` was considered via `N`, but is TREESAME. Root commits are compared to an empty tree, so `I` is !TREESAME. Parent/child relations are only visible with `--parents`, but that does not affect the commits selected in default mode, so we have shown the parent lines. --full-history without parent rewriting This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them. Even if more than one side of the merge has commits that are included, this does not imply that the merge itself is! In the example, we get ``` I A B N D O P Q ``` `M` was excluded because it is TREESAME to both parents. `E`, `C` and `B` were all walked, but only `B` was !TREESAME, so the others do not appear. Note that without parent rewriting, it is not really possible to talk about the parent/child relationships between the commits, so we show them disconnected. --full-history with parent rewriting Ordinary commits are only included if they are !TREESAME (though this can be changed, see `--sparse` below). Merges are always included. However, their parent list is rewritten: Along each parent, prune away commits that are not included themselves. This results in ``` .-A---M---N---O---P---Q / / / / / I B / D / \ / / / / `-------------' ``` Compare to `--full-history` without rewriting above. Note that `E` was pruned away because it is TREESAME, but the parent list of P was rewritten to contain `E`'s parent `I`. The same happened for `C` and `N`, and `X`, `Y` and `Q`. In addition to the above settings, you can change whether TREESAME affects inclusion: --dense Commits that are walked are included if they are not TREESAME to any parent. --sparse All commits that are walked are included. 
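The rule of following only a TREESAME parent can be reproduced with a small sketch (not from the official documentation; file and branch names are made up, and `git` ≥ 2.28 is assumed). The `-s ours` merge strategy keeps `main`'s version of `foo`, which makes the merge TREESAME to its first parent, so the default mode prunes the whole side branch, hiding `B`:

```shell
# Sketch: default path simplification hides B; --full-history walks it.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
git init -q -b main
echo 1 > foo && git add foo && git commit -qm 'I'
git checkout -qb side
echo side > foo && git commit -qam 'B'     # side branch rewrites foo
git checkout -q main
echo main > foo && git commit -qam 'A'
git merge -q -s ours -m 'M' side           # keep main's foo: M is TREESAME to A
default=$(git log --format=%s -- foo)      # B is pruned with the whole side branch
full=$(git log --full-history --format=%s -- foo)
printf 'default:\n%s\nfull:\n%s\n' "$default" "$full"
```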
Note that without `--full-history`, this still simplifies merges: if one of the parents is TREESAME, we follow only that one, so the other sides of the merge are never walked. --simplify-merges First, build a history graph in the same way that `--full-history` with parent rewriting does (see above). Then simplify each commit `C` to its replacement `C'` in the final history according to the following rules: * Set `C'` to `C`. * Replace each parent `P` of `C'` with its simplification `P'`. In the process, drop parents that are ancestors of other parents or that are root commits TREESAME to an empty tree, and remove duplicates, but take care to never drop all parents that we are TREESAME to. * If after this parent rewriting, `C'` is a root or merge commit (has zero or >1 parents), a boundary commit, or !TREESAME, it remains. Otherwise, it is replaced with its only parent. The effect of this is best shown by way of comparing to `--full-history` with parent rewriting. The example turns into: ``` .-A---M---N---O / / / I B D \ / / `---------' ``` Note the major differences in `N`, `P`, and `Q` over `--full-history`: * `N`'s parent list had `I` removed, because it is an ancestor of the other parent `M`. Still, `N` remained because it is !TREESAME. * `P`'s parent list similarly had `I` removed. `P` was then removed completely, because it had one parent and is TREESAME. * `Q`'s parent list had `Y` simplified to `X`. `X` was then removed, because it was a TREESAME root. `Q` was then removed completely, because it had one parent and is TREESAME. There is another simplification mode available: --ancestry-path[=<commit>] Limit the displayed commits to those which are an ancestor of <commit>, or which are a descendant of <commit>, or are <commit> itself. 
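The `--ancestry-path` limit can be sketched in a throwaway repository (tag and branch names are made up; assumes `git` ≥ 2.28). A side branch forked before `D` falls inside the plain `D..HEAD` range but outside its ancestry path:

```shell
# Sketch: --ancestry-path drops range commits unrelated to the endpoint.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
git init -q -b main
git commit -q --allow-empty -m 'base'
git checkout -qb side
git commit -q --allow-empty -m 'side work'
git checkout -q main
git commit -q --allow-empty -m 'main work'
git tag D                                             # plays the role of D
git merge -q --no-ff -m 'merge side' side
plain=$(git log D..HEAD --format=%s)                  # includes 'side work'
path=$(git log --ancestry-path D..HEAD --format=%s)   # only D's descendants remain
printf '%s\n---\n%s\n' "$plain" "$path"
```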
As an example use case, consider the following commit history:

```
	    D---E-------F
	   /     \       \
	  B---C---G---H---I---J
	 /                     \
	A-------K---------------L--M
```

A regular `D..M` computes the set of commits that are ancestors of `M`, but excludes the ones that are ancestors of `D`. This is useful to see what happened to the history leading to `M` since `D`, in the sense that “what does `M` have that did not exist in `D`”. The result in this example would be all the commits, except `A` and `B` (and `D` itself, of course). When we want to find out what commits in `M` are contaminated with the bug introduced by `D` and need fixing, however, we might want to view only the subset of `D..M` that are actually descendants of `D`, i.e. excluding `C` and `K`. This is exactly what the `--ancestry-path` option does. Applied to the `D..M` range, it results in:

```
	    E-------F
	     \       \
	      G---H---I---J
	                   \
	                    L--M
```

We can also use `--ancestry-path=D` instead of `--ancestry-path` which means the same thing when applied to the `D..M` range but is just more explicit. If we instead are interested in a given topic within this range, and all commits affected by that topic, we may only want to view the subset of `D..M` which contain that topic in their ancestry path. So, using `--ancestry-path=H D..M` for example would result in:

```
	    E
	     \
	      G---H---I---J
	                   \
	                    L--M
```

Whereas `--ancestry-path=K D..M` would result in

```
	K---------------L--M
```

Before discussing another option, `--show-pulls`, we need to create a new example history. A common problem users face when looking at simplified history is that a commit they know changed a file somehow does not appear in the file’s simplified history. Let’s demonstrate a new example and show how options such as `--full-history` and `--simplify-merges` work in that case:

```
	  .-A---M-----C--N---O---P
	 /     / \  \  \/   /   /
	I     B   \  R-'`-Z'   /
	 \   /     \/         /
	  \ /      /\        /
	   `---X--'  `---Y--'
```

For this example, suppose `I` created `file.txt` which was modified by `A`, `B`, and `X` in different ways.
The single-parent commits `C`, `Z`, and `Y` do not change `file.txt`. The merge commit `M` was created by resolving the merge conflict to include both changes from `A` and `B` and hence is not TREESAME to either. The merge commit `R`, however, was created by ignoring the contents of `file.txt` at `M` and taking only the contents of `file.txt` at `X`. Hence, `R` is TREESAME to `X` but not `M`. Finally, the natural merge resolution to create `N` is to take the contents of `file.txt` at `R`, so `N` is TREESAME to `R` but not `C`. The merge commits `O` and `P` are TREESAME to their first parents, but not to their second parents, `Z` and `Y` respectively. When using the default mode, `N` and `R` both have a TREESAME parent, so those edges are walked and the others are ignored. The resulting history graph is: ``` I---X ``` When using `--full-history`, Git walks every edge. This will discover the commits `A` and `B` and the merge `M`, but also will reveal the merge commits `O` and `P`. With parent rewriting, the resulting graph is: ``` .-A---M--------N---O---P / / \ \ \/ / / I B \ R-'`--' / \ / \/ / \ / /\ / `---X--' `------' ``` Here, the merge commits `O` and `P` contribute extra noise, as they did not actually contribute a change to `file.txt`. They only merged a topic that was based on an older version of `file.txt`. This is a common issue in repositories using a workflow where many contributors work in parallel and merge their topic branches along a single trunk: many unrelated merges appear in the `--full-history` results. When using the `--simplify-merges` option, the commits `O` and `P` disappear from the results. This is because the rewritten second parents of `O` and `P` are reachable from their first parents. Those edges are removed and then the commits look like single-parent commits that are TREESAME to their parent. This also happens to the commit `N`, resulting in a history view as follows: ``` .-A---M--. 
/ / \ I B R \ / / \ / / `---X--' ``` In this view, we see all of the important single-parent changes from `A`, `B`, and `X`. We also see the carefully-resolved merge `M` and the not-so-carefully-resolved merge `R`. This is usually enough information to determine why the commits `A` and `B` "disappeared" from history in the default view. However, there are a few issues with this approach. The first issue is performance. Unlike any previous option, the `--simplify-merges` option requires walking the entire commit history before returning a single result. This can make the option difficult to use for very large repositories. The second issue is one of auditing. When many contributors are working on the same repository, it is important which merge commits introduced a change into an important branch. The problematic merge `R` above is not likely to be the merge commit that was used to merge into an important branch. Instead, the merge `N` was used to merge `R` and `X` into the important branch. This commit may have information about why the change `X` came to override the changes from `A` and `B` in its commit message. --show-pulls In addition to the commits shown in the default history, show each merge commit that is not TREESAME to its first parent but is TREESAME to a later parent. When a merge commit is included by `--show-pulls`, the merge is treated as if it "pulled" the change from another branch. When using `--show-pulls` on this example (and no other options) the resulting graph is: ``` I---X---R---N ``` Here, the merge commits `R` and `N` are included because they pulled the commits `X` and `R` into the base branch, respectively. These merges are the reason the commits `A` and `B` do not appear in the default history. When `--show-pulls` is paired with `--simplify-merges`, the graph includes all of the necessary information: ``` .-A---M--. 
N / / \ / I B R \ / / \ / / `---X--' ``` Notice that since `M` is reachable from `R`, the edge from `N` to `M` was simplified away. However, `N` still appears in the history as an important commit because it "pulled" the change `R` into the main branch. The `--simplify-by-decoration` option allows you to view only the big picture of the topology of the history, by omitting commits that are not referenced by tags. Commits are marked as !TREESAME (in other words, kept after history simplification rules described above) if (1) they are referenced by tags, or (2) they change the contents of the paths given on the command line. All other commits are marked as TREESAME (subject to be simplified away). Mapping authors --------------- See [gitmailmap[5]](gitmailmap). Note that if `git shortlog` is run outside of a repository (to process log contents on standard input), it will look for a `.mailmap` file in the current directory.
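The `.mailmap` lookup can be sketched in a throwaway repository (the identities and mapping are made up; assumes `git` ≥ 2.28). A `.mailmap` file in the working tree maps the commit identity to a canonical one, which `git shortlog` applies by default:

```shell
# Sketch: .mailmap rewrites the author identity in shortlog output.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=Bob GIT_AUTHOR_EMAIL=bob@old.example.com
export GIT_COMMITTER_NAME=Bob GIT_COMMITTER_EMAIL=bob@old.example.com
git init -q -b main
git commit -q --allow-empty -m 'some change'
# Map the old commit identity to a canonical name and address.
printf 'Robert Smith <bob@new.example.com> <bob@old.example.com>\n' > .mailmap
mapped=$(git shortlog -nse HEAD)
printf '%s\n' "$mapped"
```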
git git-diff git-diff ======== Name ---- git-diff - Show changes between commits, commit and working tree, etc Synopsis -------- ``` git diff [<options>] [<commit>] [--] [<path>…​] git diff [<options>] --cached [--merge-base] [<commit>] [--] [<path>…​] git diff [<options>] [--merge-base] <commit> [<commit>…​] <commit> [--] [<path>…​] git diff [<options>] <commit>…​<commit> [--] [<path>…​] git diff [<options>] <blob> <blob> git diff [<options>] --no-index [--] <path> <path> ``` Description ----------- Show changes between the working tree and the index or a tree, changes between the index and a tree, changes between two trees, changes resulting from a merge, changes between two blob objects, or changes between two files on disk. *git diff* [<options>] [--] [<path>…​] This form is to view the changes you made relative to the index (staging area for the next commit). In other words, the differences are what you `could` tell Git to further add to the index but you still haven’t. You can stage these changes by using [git-add[1]](git-add). *git diff* [<options>] --no-index [--] <path> <path> This form is to compare the given two paths on the filesystem. You can omit the `--no-index` option when running the command in a working tree controlled by Git and at least one of the paths points outside the working tree, or when running the command outside a working tree controlled by Git. This form implies `--exit-code`. *git diff* [<options>] --cached [--merge-base] [<commit>] [--] [<path>…​] This form is to view the changes you staged for the next commit relative to the named <commit>. Typically you would want comparison with the latest commit, so if you do not give <commit>, it defaults to HEAD. If HEAD does not exist (e.g. unborn branches) and <commit> is not given, it shows all staged changes. --staged is a synonym of --cached. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. 
`git diff --cached --merge-base A` is equivalent to `git diff --cached $(git merge-base A HEAD)`. *git diff* [<options>] [--merge-base] <commit> [--] [<path>…​] This form is to view the changes you have in your working tree relative to the named <commit>. You can use HEAD to compare it with the latest commit, or a branch name to compare with the tip of a different branch. If --merge-base is given, instead of using <commit>, use the merge base of <commit> and HEAD. `git diff --merge-base A` is equivalent to `git diff $(git merge-base A HEAD)`. *git diff* [<options>] [--merge-base] <commit> <commit> [--] [<path>…​] This is to view the changes between two arbitrary <commit>. If --merge-base is given, use the merge base of the two commits for the "before" side. `git diff --merge-base A B` is equivalent to `git diff $(git merge-base A B) B`. *git diff* [<options>] <commit> <commit>…​ <commit> [--] [<path>…​] This form is to view the results of a merge commit. The first listed <commit> must be the merge itself; the remaining two or more commits should be its parents. Convenient ways to produce the desired set of revisions are to use the suffixes `^@` and `^!`. If A is a merge commit, then `git diff A A^@`, `git diff A^!` and `git show A` all give the same combined diff. *git diff* [<options>] <commit>..<commit> [--] [<path>…​] This is synonymous to the earlier form (without the `..`) for viewing the changes between two arbitrary <commit>. If <commit> on one side is omitted, it will have the same effect as using HEAD instead. *git diff* [<options>] <commit>...<commit> [--] [<path>…​] This form is to view the changes on the branch containing and up to the second <commit>, starting at a common ancestor of both <commit>. `git diff A...B` is equivalent to `git diff $(git merge-base A B) B`. You can omit any one of <commit>, which has the same effect as using HEAD instead. 
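The first forms, and the `A...B` equivalence just stated, can be sketched in a throwaway repository (file and branch names are made up; assumes `git` ≥ 2.28):

```shell
# Sketch: working tree vs index, index vs HEAD, and the A...B form.
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
git init -q -b main
echo one > f && git add f && git commit -qm 'c1'
echo two > f                                  # working tree change, not yet staged
unstaged=$(git diff --name-only)              # working tree vs index
git add f
staged=$(git diff --cached --name-only)       # index vs HEAD
git commit -qm 'c2'
git checkout -qb feature
echo three >> f && git commit -qam 'c3'
dots=$(git diff main...feature)               # changes on feature since the merge base
explicit=$(git diff "$(git merge-base main feature)" feature)
echo "unstaged=$unstaged staged=$staged"
[ "$dots" = "$explicit" ] && echo 'A...B matches the merge-base form'
```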
Just in case you are doing something exotic, it should be noted that all of the <commit> in the above description, except in the `--merge-base` case and in the last two forms that use `..` notations, can be any <tree>. For a more complete list of ways to spell <commit>, see "SPECIFYING REVISIONS" section in [gitrevisions[7]](gitrevisions). However, "diff" is about comparing two `endpoints`, not ranges, and the range notations (`<commit>..<commit>` and `<commit>...<commit>`) do not mean a range as defined in the "SPECIFYING RANGES" section in [gitrevisions[7]](gitrevisions). *git diff* [<options>] <blob> <blob> This form is to view the differences between the raw contents of two blob objects. Options ------- -p -u --patch Generate patch (see section on generating patches). This is the default. -s --no-patch Suppress diff output. Useful for commands like `git show` that show the patch by default, or to cancel the effect of `--patch`. -U<n> --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies `--patch`. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char> --output-indicator-old=<char> --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and ' ' respectively. --raw Generate the diff in raw format. --patch-with-raw Synonym for `-p --raw`. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. 
If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. `patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use `--diff-algorithm=default` option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. 
The information is put between the filename part and the graph part. Implies `--stat`. --numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. 
Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>…​] Synonym for --dirstat=files,param1,param2…​ --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for `-p --stat`. -z When `--raw`, `--numstat`, `--name-only` or `--name-status` has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page. --name-status Show only names and status of changed files. See the description of the `--diff-filter` option on what the status letters mean. Just like `--name-only` the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying `--submodule=short` the `short` format is used. This format just shows the names of the commits at the beginning and end of the range. When `--submodule` or `--submodule=log` is specified, the `log` format is used. This format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. 
When `--submodule=diff` is specified, the `diff` format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to `diff.submodule` or the `short` format if the config option is unset. --color[=<when>] Show colored diff. `--color` (i.e. without `=<when>`) is the same as `--color=always`. `<when>` can be one of `always`, `never`, or `auto`. It can be changed by the `color.ui` and `color.diff` configuration settings. --no-color Turn off colored diff. This can be used to override configuration settings. It is the same as `--color=never`. --color-moved[=<mode>] Moved lines of code are colored differently. It can be changed by the `diff.colorMoved` configuration setting. The <mode> defaults to `no` if the option is not given and to `zebra` if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for `zebra`. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with `color.diff.newMoved`. Similarly `color.diff.oldMoved` will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the `color.diff.{old,new}Moved` color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in `blocks` mode. The blocks are painted using either the `color.diff.{old,new}Moved` color or `color.diff.{old,new}MovedAlternative`. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to `zebra`, but additional dimming of uninteresting parts of moved code is performed. 
The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. `dimmed_zebra` is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as `--color-moved=no`. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for `--color-moved`. It can be set by the `diff.colorMovedWS` configuration setting. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as `--color-moved-ws=no`. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see `--word-diff-regex` below. The <mode> defaults to `plain`, and must be one of: color Highlight changed words using only colors. Implies `--color`. plain Show words as `[-removed-]` and `{+added+}`. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. 
Added/removed/unchanged runs are printed in the usual unified diff format, starting with a `+`/`-`/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde `~` on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies `--word-diff` unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append `|[^[:space:]]` to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, `--word-diff-regex=.` will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see [gitattributes[5]](gitattributes) or [git-config[1]](git-config). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to `--word-diff=color` plus (if a regex was specified) `--word-diff-regex=<regex>`. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. 
By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code.

--ws-error-highlight=<kind> Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` resets the list to `new` and `all` is a shorthand for `old,new,context`. When this option is not given, and the configuration variable `diff.wsErrorHighlight` is not set, only whitespace errors in `new` lines are highlighted. The whitespace errors are colored with `color.diff.whitespace`.

--full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output.

--binary In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. Implies `--patch`.

--abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long and uniquely refers to the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. A non-default number of digits can be specified with `--abbrev=<n>`.

-B[<n>][/<m>] --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create.
This serves two purposes:

It affects the way a change that amounts to a total rewrite of a file appears not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines).

When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%). `-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size is eligible for being picked up as a possible source of a rename to another file.

-M[<n>] --find-renames[=<n>] Detect renames. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%.

-C[<n>] --find-copies[=<n>] Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`.

--find-copies-harder For performance reasons, by default, the `-C` option finds copies only if the original file of the copy was modified in the same changeset.
This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect.

-D --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair.

-l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited.

--diff-filter=[(A|C|D|M|R|T|U|X|B)…[*]] Select only files that are Added (`A`), Copied (`C`), Deleted (`D`), Modified (`M`), Renamed (`R`), have their type (i.e. regular file, symlink, submodule, …) changed (`T`), are Unmerged (`U`), are Unknown (`X`), or have had their pairing Broken (`B`). Any combination of the filter characters (including none) can be used. When `*` (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected.
Also, these upper-case letters can be downcased to exclude. E.g. `--diff-filter=ad` excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled.

-S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into `-S`, and keep going until you get the very first version of the block. Binary files are searched as well.

-G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between `-S<regex> --pickaxe-regex` and `-G<regex>`, consider a commit with the following diff in the same file:

```
+    return frotz(nitfol, two->ptr, 1, 0);
...
-    hit = frotz(nitfol, mf2.ptr, 1, 0);
```

While `git log -G"frotz\(nitfol"` will show this commit, `git log -S"frotz\(nitfol" --pickaxe-regex` will not (because the number of occurrences of that string did not change). Unless `--text` is supplied patches of binary files without a textconv filter will be ignored. See the `pickaxe` entry in [gitdiffcore[7]](gitdiffcore) for more information.

--find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to `-S`, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the `-t` option in `git-log` to also find trees.

--pickaxe-all When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change in <string>.
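The `-S` versus `-G` distinction above can be reproduced directly. This is a minimal sketch, assuming only that `git` is available; the file name and the `frotz` identifier are stand-ins, following the example in the text. A line containing `frotz` is rewritten without changing how many times `frotz` occurs, so only `-G` reports the second commit:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
printf 'hit = frotz(x);\n' > f.c
git add f.c
git commit -qm 'introduce frotz'     # occurrence count goes 0 -> 1: -S and -G both match
printf 'ret = frotz(x);\n' > f.c
git commit -qam 'rewrite the line'   # the line changes, but the count stays 1: only -G matches
s_hits=$(git log --format=%s -S'frotz' | wc -l)
g_hits=$(git log --format=%s -G'frotz' | wc -l)
echo "-S matched $s_hits commit(s); -G matched $g_hits commit(s)"
```

Here `-S` matches one commit (the one that introduced `frotz`) while `-G` matches both, since the second commit's patch text contains added and removed lines mentioning `frotz`.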
--pickaxe-regex Treat the <string> given to `-S` as an extended POSIX regular expression to match.

-O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows:

* Blank lines are ignored, so they can be used as separators for readability.
* Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash.
* Each other line contains a single pattern.

Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`".

--skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise.

-R Swap two inputs; that is, show differences from index or on-disk file to tree contents.
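A minimal sketch of `-O` in action, assuming only that `git` is present; the file names and the "documentation first" ordering are invented for the example. The orderfile makes `*.md` paths sort before `*.c` paths in the diff output:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo code > aaa.c
echo docs > zzz.md
git add .
git commit -qm base
echo code2 > aaa.c
echo docs2 > zzz.md
order=$repo/.git/orderfile            # kept under .git so the orderfile stays out of the diff
printf '# docs first\n*.md\n*.c\n' > "$order"
plain=$(git diff --name-only)         # default order: aaa.c before zzz.md
ordered=$(git diff --name-only -O"$order")
echo "default: $plain"
echo "ordered: $ordered"
```

Without `-O` the paths come out in the normal (sorted) order; with the orderfile, `zzz.md` is listed first because it matches the earlier `*.md` pattern.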
--relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex> --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to `diff.interHunkContext` or 0 if the config option is unset. -W --function-context Show whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies `--exit-code`. --ext-diff Allow an external diff helper to be executed. 
If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends. --no-ext-diff Disallow external diff drivers. --textconv --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". 
This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also [gitdiffcore[7]](gitdiffcore). -1 --base -2 --ours -3 --theirs Compare the working tree with the "base" version (stage #1), "our branch" (stage #2) or "their branch" (stage #3). The index contains these stages only for unmerged entries i.e. while resolving conflicts. See [git-read-tree[1]](git-read-tree) section "3-Way Merge" for detailed information. -0 Omit diff output for unmerged entries and just show "Unmerged". Can be used only when comparing the working tree with the index. <path>…​ The <paths> parameters, when given, are used to limit the diff to the named paths (you can give directory names and get diff for all files under them). Raw output format ----------------- The raw output format from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" are very similar. These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>…​] compares the trees named by the two arguments. git-diff-files [<pattern>…​] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file. 
An output line is formatted this way:

```
in-place edit  :100644 100644 bcd1234 0123456 M file0
copy-edit      :100644 100644 abcd123 1234567 C68 file1 file2
rename-edit    :100644 100644 abcd123 1234567 R86 file1 file3
create         :000000 100644 0000000 1234567 A file4
delete         :100644 000000 1234567 0000000 D file5
unmerged       :000000 000000 0000000 0000000 U file6
```

That is, from the left to the right:

1. a colon.
2. mode for "src"; 000000 if creation or unmerged.
3. a space.
4. mode for "dst"; 000000 if deletion or unmerged.
5. a space.
6. sha1 for "src"; 0{40} if creation or unmerged.
7. a space.
8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index".
9. a space.
10. status, followed by optional "score" number.
11. a tab or a NUL when `-z` option is used.
12. path for "src"
13. a tab or a NUL when `-z` option is used; only exists for C or R.
14. path for "dst"; only exists for C or R.
15. an LF or a NUL when `-z` option is used, to terminate the record.

Possible status letters are:

* A: addition of a file
* C: copy of a file into a new one
* D: deletion of a file
* M: modification of the contents or mode of a file
* R: renaming of a file
* T: change in the type of the file (regular file, symbolic link or submodule)
* U: file is unmerged (you must complete the merge before it can be committed)
* X: "unknown" change type (most probably a bug, please report it)

Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites.

The sha1 for "dst" is shown as all 0’s if a file on the filesystem is out of sync with the index.

Example:

```
:100644 100644 5be4a4a 0000000 M file.c
```

Without the `-z` option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)).
Using `-z` the filename is output verbatim and the line is terminated by a NUL byte.

Diff format for merges
----------------------

"git-diff-tree", "git-diff-files" and "git-diff --raw" can take `-c` or `--cc` option to generate diff output also for merge commits. The output differs from the format described above in the following way:

1. there is a colon for each parent
2. there are more "src" modes and "src" sha1
3. status is concatenated status characters for each parent
4. no optional "score" number
5. tab-separated pathname(s) of the file

For `-c` and `--cc`, only the destination or final path is shown even if the file was renamed on any side of history. With `--combined-all-paths`, the name of the path in each parent is shown followed by the name of the path in the merge commit.

Examples for `-c` and `--cc` without `--combined-all-paths`:

```
::100644 100644 100644 fabadb8 cc95eb0 4866510 MM	desc.c
::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM	bar.sh
::100644 100644 100644 e07d6c5 9042e82 ee91881 RR	phooey.c
```

Examples when `--combined-all-paths` added to either `-c` or `--cc`:

```
::100644 100644 100644 fabadb8 cc95eb0 4866510 MM	desc.c	desc.c	desc.c
::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM	foo.sh	bar.sh	bar.sh
::100644 100644 100644 e07d6c5 9042e82 ee91881 RR	fooey.c	fuey.c	phooey.c
```

Note that `combined diff` lists only files which were modified from all parents.

Generating patch text with -p
-----------------------------

Running [git-diff[1]](git-diff), [git-log[1]](git-log), [git-show[1]](git-show), [git-diff-index[1]](git-diff-index), [git-diff-tree[1]](git-diff-tree), or [git-diff-files[1]](git-diff-files) with the `-p` option produces patch text. You can customize the creation of patch text via the `GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables (see [git[1]](git)), and the `diff` attribute (see [gitattributes[5]](gitattributes)).

What the -p option produces is slightly different from the traditional diff format:

1.
It is preceded with a "git diff" header that looks like this:

```
diff --git a/file1 b/file2
```

The `a/` and `b/` filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, `/dev/null` is `not` used in place of the `a/` or `b/` filenames. When rename/copy is involved, `file1` and `file2` show the name of the source file of the rename/copy and the name of the file that rename/copy produces, respectively.

2. It is followed by one or more extended header lines:

```
old mode <mode>
new mode <mode>
deleted file mode <mode>
new file mode <mode>
copy from <path>
copy to <path>
rename from <path>
rename to <path>
similarity index <number>
dissimilarity index <number>
index <hash>..<hash> <mode>
```

File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the `a/` and `b/` prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode.

3. Pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)).

4. All the `file1` files in the output refer to files before the commit, and all the `file2` files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b:

```
diff --git a/a b/b
rename from a
rename to b
diff --git a/b b/a
rename from b
rename to a
```

5.
Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in [gitattributes[5]](gitattributes) for details of how to tailor this to specific languages.

Combined diff format
--------------------

Any diff-generating command can take the `-c` or `--cc` option to produce a `combined diff` when showing a merge. This is the default format when showing merges with [git-diff[1]](git-diff) or [git-show[1]](git-show). Note also that you can give a suitable `--diff-merges` option to any of these commands to force generation of diffs in a specific format.

A "combined diff" format looks like this:

```
diff --combined describe.c
index fabadb8,cc95eb0..4866510
--- a/describe.c
+++ b/describe.c
@@@ -98,20 -98,12 +98,20 @@@
	return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1;
  }
  
- static void describe(char *arg)
 -static void describe(struct commit *cmit, int last_one)
++static void describe(char *arg, int last_one)
  {
 +	unsigned char sha1[20];
 +	struct commit *cmit;
  	struct commit_list *list;
  	static int initialized = 0;
  	struct commit_name *n;
  
 +	if (get_sha1(arg, sha1) < 0)
 +		usage(describe_usage);
 +	cmit = lookup_commit_reference(sha1);
 +	if (!cmit)
 +		usage(describe_usage);
 +
  	if (!initialized) {
  		initialized = 1;
  		for_each_ref(get_name);
```

1. It is preceded with a "git diff" header, that looks like this (when the `-c` option is used):

```
diff --combined file
```

or like this (when the `--cc` option is used):

```
diff --cc file
```

2. It is followed by one or more extended header lines (this example shows a merge with two parents):

```
index <hash>,<hash>..<hash>
mode <mode>,<mode>..<mode>
new file mode <mode>
deleted file mode <mode>,<mode>
```

The `mode <mode>,<mode>..<mode>` line appears only if at least one of the <mode> is different from the rest.
Extended headers with information about detected contents movement (renames and copying detection) are designed to work with a diff of two <tree-ish> and are not used by the combined diff format.

3. It is followed by a two-line from-file/to-file header:

```
--- a/file
+++ b/file
```

Similar to the two-line header for the traditional `unified` diff format, `/dev/null` is used to signal created or deleted files.

However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file header you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit:

```
--- a/file
--- a/file
--- a/file
+++ b/file
```

This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents.

4. The chunk header format is modified to prevent people from accidentally feeding it to `patch -p1`. The combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended `index` header:

```
@@@ <from-file-range> <from-file-range> <to-file-range> @@@
```

There are (number of parents + 1) `@` characters in the chunk header for combined diff format.

Unlike the traditional `unified` diff format, which shows two files A and B with a single column that has `-` (minus — appears in A but removed in B), `+` (plus — missing in A but added to B), or `" "` (space — unchanged) prefix, this format compares two or more files file1, file2,…​ with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X’s line is different from it.

A `-` character in column N means that the line appears in fileN but it does not appear in the result. A `+` character in column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent).
In the above example output, the function signature was changed from both files (hence two `-` removals from both file1 and file2, plus `++` to mean one line that was added does not appear in either file1 or file2). Also, eight other lines are the same from file1 but do not appear in file2 (hence prefixed with `+`).

When shown by `git diff-tree -c`, it compares the parents of a merge commit with the merge result (i.e. file1..fileN are the parents). When shown by `git diff-files -c`, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version").

Other diff formats
------------------

The `--summary` option describes newly added, deleted, renamed and copied files. The `--stat` option adds a diffstat(1) graph to the output. These options can be combined with other options, such as `-p`, and are meant for human consumption.

When showing a change that involves a rename or a copy, `--stat` output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves `arch/i386/Makefile` to `arch/x86/Makefile` while modifying 4 lines will be shown like this:

```
arch/{i386 => x86}/Makefile |   4 +--
```

The `--numstat` option gives the diffstat(1) information but is designed for easier machine consumption. An entry in `--numstat` output looks like this:

```
1	2	README
3	1	arch/{i386 => x86}/Makefile
```

That is, from left to right:

1. the number of added lines;
2. a tab;
3. the number of deleted lines;
4. a tab;
5. pathname (possibly with rename/copy information);
6. a newline.

When the `-z` output option is in effect, the output is formatted this way:

```
1	2	README NUL
3	1	NUL arch/i386/Makefile NUL arch/x86/Makefile NUL
```

That is:

1. the number of added lines;
2. a tab;
3. the number of deleted lines;
4. a tab;
5. a NUL (only exists if renamed/copied);
6. pathname in preimage;
7. a NUL (only exists if renamed/copied);
8.
pathname in postimage (only exists if renamed/copied);
9. a NUL.

The extra `NUL` before the preimage path in the renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to `NUL` would yield the pathname, but if that is `NUL`, the record will show two paths.

Examples
--------

Various ways to check your working tree:

```
$ git diff            (1)
$ git diff --cached   (2)
$ git diff HEAD       (3)
```

1. Changes in the working tree not yet staged for the next commit.
2. Changes between the index and your last commit; what you would be committing if you run `git commit` without the `-a` option.
3. Changes in the working tree since your last commit; what you would be committing if you run `git commit -a`.

Comparing with arbitrary commits:

```
$ git diff test            (1)
$ git diff HEAD -- ./test  (2)
$ git diff HEAD^ HEAD      (3)
```

1. Instead of using the tip of the current branch, compare with the tip of "test" branch.
2. Instead of comparing with the tip of "test" branch, compare with the tip of the current branch, but limit the comparison to the file "test".
3. Compare the version before the last commit and the last commit.

Comparing branches:

```
$ git diff topic master    (1)
$ git diff topic..master   (2)
$ git diff topic...master  (3)
```

1. Changes between the tips of the topic and the master branches.
2. Same as above.
3. Changes that occurred on the master branch since when the topic branch was started off it.

Limiting the diff output:

```
$ git diff --diff-filter=MRC            (1)
$ git diff --name-status                (2)
$ git diff arch/i386 include/asm-i386   (3)
```

1. Show only modification, rename, and copy, but not addition or deletion.
2. Show only names and the nature of change, but not actual diff output.
3. Limit diff output to named subtrees.

Munging the diff output:

```
$ git diff --find-copies-harder -B -C  (1)
$ git diff -R                          (2)
```

1.
Spend extra cycles to find renames, copies and complete rewrites (very expensive).
2. Output diff in reverse.

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

diff.autoRefreshIndex
When using `git diff` to compare with work tree files, do not consider a stat-only change as changed. Instead, silently run `git update-index --refresh` to update the cached stat information for paths whose contents in the work tree match the contents in the index. This option defaults to true. Note that this affects only `git diff` Porcelain, and not lower level `diff` commands such as `git diff-files`.

diff.dirstat
A comma separated list of `--dirstat` parameters specifying the default behavior of the `--dirstat` option to [git-diff[1]](git-diff) and friends. The defaults can be overridden on the command line (using `--dirstat=<param1,param2,...>`). The fallback defaults (when not changed by `diff.dirstat`) are `changes,noncumulative,3`. The following parameters are available:

`changes`
Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given.

`lines`
Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options.

`files`
Compute the dirstat numbers by counting the number of files changed.
Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all.

`cumulative`
Count changes in a child directory for the parent directory as well. Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter.

<limit>
An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output.

Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `files,10,cumulative`.

diff.statGraphWidth
Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch.

diff.context
Generate diffs with <n> lines of context instead of the default of 3. This value is overridden by the -U option.

diff.interHunkContext
Show the context between diff hunks, up to the specified number of lines, thereby fusing the hunks that are close to each other. This value serves as the default for the `--inter-hunk-context` command line option.

diff.external
If this config variable is set, diff generation is not performed using the internal diff machinery, but using the given command. Can be overridden with the `GIT_EXTERNAL_DIFF` environment variable. The command is called with parameters as described under "git Diffs" in [git[1]](git). Note: if you want to use an external diff program only on a subset of your files, you might want to use [gitattributes[5]](gitattributes) instead.

diff.ignoreSubmodules
Sets the default value of --ignore-submodules. Note that this affects only `git diff` Porcelain, and not lower level `diff` commands such as `git diff-files`.
`git checkout` and `git switch` also honor this setting when reporting uncommitted changes. Setting it to `all` disables the submodule summary normally shown by `git commit` and `git status` when `status.submoduleSummary` is set, unless it is overridden by using the --ignore-submodules command-line option. The `git submodule` commands are not affected by this setting. By default this is set to untracked so that any untracked submodules are ignored.

diff.mnemonicPrefix
If set, `git diff` uses a prefix pair that is different from the standard "a/" and "b/" depending on what is being compared. When this configuration is in effect, reverse diff output also swaps the order of the prefixes:

`git diff`
compares the (i)ndex and the (w)ork tree;

`git diff HEAD`
compares a (c)ommit and the (w)ork tree;

`git diff --cached`
compares a (c)ommit and the (i)ndex;

`git diff HEAD:file1 file2`
compares an (o)bject and a (w)ork tree entity;

`git diff --no-index a b`
compares two non-git things (1) and (2).

diff.noprefix
If set, `git diff` does not show any source or destination prefix.

diff.relative
If set to `true`, `git diff` does not show changes outside of the directory and shows pathnames relative to the current directory.

diff.orderFile
File indicating how to order files within a diff. See the `-O` option to [git-diff[1]](git-diff) for details. If `diff.orderFile` is a relative pathname, it is treated as relative to the top of the working tree.

diff.renameLimit
The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the `git diff` option `-l`. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off.

diff.renames
Whether and how Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to true.
Note that this affects only `git diff` Porcelain like [git-diff[1]](git-diff) and [git-log[1]](git-log), and not lower level commands such as [git-diff-files[1]](git-diff-files).

diff.suppressBlankEmpty
A boolean to inhibit the standard behavior of printing a space before each empty output line. Defaults to false.

diff.submodule
Specify the format in which differences in submodules are shown. The "short" format just shows the names of the commits at the beginning and end of the range. The "log" format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. The "diff" format shows an inline diff of the changed contents of the submodule. Defaults to "short".

diff.wordRegex
A POSIX Extended Regular Expression used to determine what is a "word" when performing word-by-word difference calculations. Character sequences that match the regular expression are "words", all other characters are **ignorable** whitespace.

diff.<driver>.command
The custom diff driver command. See [gitattributes[5]](gitattributes) for details.

diff.<driver>.xfuncname
The regular expression that the diff driver should use to recognize the hunk header. A built-in pattern may also be used. See [gitattributes[5]](gitattributes) for details.

diff.<driver>.binary
Set this option to true to make the diff driver treat files as binary. See [gitattributes[5]](gitattributes) for details.

diff.<driver>.textconv
The command that the diff driver should call to generate the text-converted version of a file. The result of the conversion is used to generate a human-readable diff. See [gitattributes[5]](gitattributes) for details.

diff.<driver>.wordRegex
The regular expression that the diff driver should use to split words in a line. See [gitattributes[5]](gitattributes) for details.

diff.<driver>.cachetextconv
Set this option to true to make the diff driver cache the text conversion outputs. See [gitattributes[5]](gitattributes) for details.
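As a sketch of how a textconv diff driver fits together (the driver name `odump` and the `od -An -tx1` dump command are illustrative choices, not Git defaults), the pieces can be wired up in a throwaway repository like this:

```shell
# Hypothetical textconv driver "odump": dumps bytes as hex so that a
# change to a file routed through the driver shows up as a readable diff.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "[email protected]"
git config user.name "Example"
echo '*.bin diff=odump' > .gitattributes   # route *.bin through the driver
git config diff.odump.textconv 'od -An -tx1'
printf 'abc' > data.bin
git add .
git commit -qm 'add data.bin'
printf 'abd' > data.bin                    # change the last byte
out=$(git diff)                            # porcelain diff applies textconv
echo "$out"
```

With the driver in place, the hunk shows the old and new byte dumps instead of the raw file contents; pairing it with `diff.odump.cachetextconv` would cache the conversion output between runs.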
diff.tool
Controls which diff tool is used by [git-difftool[1]](git-difftool). This variable overrides the value configured in `merge.tool`. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<tool>.cmd variable is defined.

* araxis
* bc
* codecompare
* deltawalker
* diffmerge
* diffuse
* ecmerge
* emerge
* examdiff
* guiffy
* gvimdiff
* kdiff3
* kompare
* meld
* nvimdiff
* opendiff
* p4merge
* smerge
* tkdiff
* vimdiff
* winmerge
* xxdiff

diff.indentHeuristic
Set this option to `false` to disable the default heuristics that shift diff hunk boundaries to make patches easier to read.

diff.algorithm
Choose a diff algorithm. The variants are as follows:

`default`, `myers`
The basic greedy diff algorithm. Currently, this is the default.

`minimal`
Spend extra time to make sure the smallest possible diff is produced.

`patience`
Use "patience diff" algorithm when generating patches.

`histogram`
This algorithm extends the patience algorithm to "support low-occurrence common elements".

diff.wsErrorHighlight
Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` resets the list to `new` and `all` is a shorthand for `old,new,context`. The whitespace errors are colored with `color.diff.whitespace`. The command line option `--ws-error-highlight=<kind>` overrides this setting.

diff.colorMoved
If set to either a valid `<mode>` or a true value, moved lines in a diff are colored differently; for details of valid modes see `--color-moved` in [git-diff[1]](git-diff). If simply set to true, the default color mode will be used. When set to false, moved lines are not colored.

diff.colorMovedWS
When moved lines are colored using e.g. the `diff.colorMoved` setting, this option controls how whitespace is treated; for details of valid modes see `--color-moved-ws` in [git-diff[1]](git-diff).

See also
--------

diff(1), [git-difftool[1]](git-difftool), [git-log[1]](git-log), [gitdiffcore[7]](gitdiffcore), [git-format-patch[1]](git-format-patch), [git-apply[1]](git-apply), [git-show[1]](git-show)
git-commit-graph
================

Name
----

git-commit-graph - Write and verify Git commit-graph files

Synopsis
--------

```
git commit-graph verify [--object-dir <dir>] [--shallow] [--[no-]progress]
git commit-graph write [--object-dir <dir>] [--append]
			[--split[=<strategy>]] [--reachable | --stdin-packs | --stdin-commits]
			[--changed-paths] [--[no-]max-new-filters <n>] [--[no-]progress]
			<split options>
```

Description
-----------

Manage the serialized commit-graph file.

Options
-------

--object-dir
Use given directory for the location of packfiles and commit-graph file. This parameter exists to specify the location of an alternate that only has the objects directory, not a full `.git` directory. The commit-graph file is expected to be in the `<dir>/info` directory and the packfiles are expected to be in `<dir>/pack`. If the directory could not be made into an absolute path, or does not match any known object directory, `git commit-graph ...` will exit with non-zero status.

--[no-]progress
Turn progress on/off explicitly. If neither is specified, progress is shown if standard error is connected to a terminal.

Commands
--------

*write*
Write a commit-graph file based on the commits found in packfiles. If the config option `core.commitGraph` is disabled, then this command will output a warning, then return success without writing a commit-graph file.

With the `--stdin-packs` option, generate the new commit graph by walking objects only in the specified pack-indexes. (Cannot be combined with `--stdin-commits` or `--reachable`.)

With the `--stdin-commits` option, generate the new commit graph by walking commits starting at the commits specified in stdin as a list of OIDs in hex, one OID per line. OIDs that resolve to non-commits (either directly, or by peeling tags) are silently ignored. OIDs that are malformed, or do not exist, generate an error. (Cannot be combined with `--stdin-packs` or `--reachable`.)
With the `--reachable` option, generate the new commit graph by walking commits starting at all refs. (Cannot be combined with `--stdin-commits` or `--stdin-packs`.)

With the `--append` option, include all commits that are present in the existing commit-graph file.

With the `--changed-paths` option, compute and write information about the paths changed between a commit and its first parent. This operation can take a while on large repositories. It provides significant performance gains for getting the history of a directory or a file with `git log -- <path>`. If this option is given, future commit-graph writes will automatically assume that this option was intended. Use `--no-changed-paths` to stop storing this data.

With the `--max-new-filters=<n>` option, generate at most `n` new Bloom filters (if `--changed-paths` is specified). If `n` is `-1`, no limit is enforced. Only commits present in the new layer count against this limit. To retroactively compute Bloom filters over earlier layers, it is advised to use `--split=replace`. Overrides the `commitGraph.maxNewFilters` configuration.

With the `--split[=<strategy>]` option, write the commit-graph as a chain of multiple commit-graph files stored in `<dir>/info/commit-graphs`. Commit-graph layers are merged based on the strategy and other splitting options. The new commits not already in the commit-graph are added in a new "tip" file. This file is merged with the existing file if the following merge conditions are met:

* If `--split=no-merge` is specified, a merge is never performed, and the remaining options are ignored. `--split=replace` overwrites the existing chain with a new one. A bare `--split` defers to the remaining options. (Note that merging a chain of commit graphs replaces the existing chain with a length-1 chain where the first and only incremental holds the entire graph).
* If `--size-multiple=<X>` is not specified, let `X` equal 2.
If the new tip file would have `N` commits and the previous tip has `M` commits and `X` times `N` is greater than `M`, instead merge the two files into a single file.

* If `--max-commits=<M>` is specified with `M` a positive integer, and the new tip file would have more than `M` commits, then instead merge the new tip with the previous tip.

Finally, if `--expire-time=<datetime>` is not specified, let `datetime` be the current time. After writing the split commit-graph, delete all unused commit-graph files whose modified times are older than `datetime`.

*verify*
Read the commit-graph file and verify its contents against the object database. Used to check for corrupted data.

With the `--shallow` option, only check the tip commit-graph file in a chain of split commit-graphs.

Examples
--------

* Write a commit-graph file for the packed commits in your local `.git` directory.

```
$ git commit-graph write
```

* Write a commit-graph file, extending the current commit-graph file using commits in `<pack-index>`.

```
$ echo <pack-index> | git commit-graph write --stdin-packs
```

* Write a commit-graph file containing all reachable commits.

```
$ git show-ref -s | git commit-graph write --stdin-commits
```

* Write a commit-graph file containing all commits in the current commit-graph file along with those reachable from `HEAD`.

```
$ git rev-parse HEAD | git commit-graph write --stdin-commits --append
```

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

commitGraph.generationVersion
Specifies the type of generation number version to use when writing or reading the commit-graph file. If version 1 is specified, then the corrected commit dates will not be written or read. Defaults to 2.
commitGraph.maxNewFilters
Specifies the default value for the `--max-new-filters` option of `git commit-graph write` (c.f., [git-commit-graph[1]](git-commit-graph)).

commitGraph.readChangedPaths
If true, then git will use the changed-path Bloom filters in the commit-graph file (if it exists, and they are present). Defaults to true. See [git-commit-graph[1]](git-commit-graph) for more information.

File format
-----------

See [gitformat-commit-graph[5]](gitformat-commit-graph).

git-verify-tag
==============

Name
----

git-verify-tag - Check the GPG signature of tags

Synopsis
--------

```
git verify-tag [-v | --verbose] [--format=<format>] [--raw] <tag>…
```

Description
-----------

Validates the GPG signature created by `git tag`.

Options
-------

--raw
Print the raw gpg status output to standard error instead of the normal human-readable output.

-v
--verbose
Print the contents of the tag object before validating it.

<tag>…
SHA-1 identifiers of Git tag objects.

user-manual
===========

Introduction
------------

Git is a fast distributed revision control system.

This manual is designed to be readable by someone with basic UNIX command-line skills, but no previous knowledge of Git.

[Repositories and Branches](#repositories-and-branches) and [Exploring Git history](#exploring-git-history) explain how to fetch and study a project using git; read these chapters to learn how to build and test a particular version of a software project, search for regressions, and so on.

People needing to do actual development will also want to read [Developing with Git](#Developing-With-git) and [Sharing development with others](#sharing-development).

Further chapters cover more specialized topics.

Comprehensive reference documentation is available through the man pages, or the [git-help[1]](git-help) command.
For example, for the command `git clone <repo>`, you can either use:

```
$ man git-clone
```

or:

```
$ git help clone
```

With the latter, you can use the manual viewer of your choice; see [git-help[1]](git-help) for more information.

See also [Git Quick Reference](#git-quick-start) for a brief overview of Git commands, without any explanation.

Finally, see [Notes and todo list for this manual](#todo) for ways that you can help make this manual more complete.

Repositories and branches
-------------------------

### How to get a Git repository

It will be useful to have a Git repository to experiment with as you read this manual.

The best way to get one is by using the [git-clone[1]](git-clone) command to download a copy of an existing repository. If you don’t already have a project in mind, here are some interesting examples:

```
# Git itself (approx. 40MB download):
$ git clone git://git.kernel.org/pub/scm/git/git.git
# the Linux kernel (approx. 640MB download):
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
```

The initial clone may be time-consuming for a large project, but you will only need to clone once.

The clone command creates a new directory named after the project (`git` or `linux` in the examples above). After you cd into this directory, you will see that it contains a copy of the project files, called the [working tree](#def_working_tree), together with a special top-level directory named `.git`, which contains all the information about the history of the project.

### How to check out a different version of a project

Git is best thought of as a tool for storing the history of a collection of files. It stores the history as a compressed collection of interrelated snapshots of the project’s contents. In Git each such version is called a [commit](#def_commit).
Those snapshots aren’t necessarily all arranged in a single line from oldest to newest; instead, work may simultaneously proceed along parallel lines of development, called [branches](#def_branch), which may merge and diverge.

A single Git repository can track development on multiple branches. It does this by keeping a list of [heads](#def_head) which reference the latest commit on each branch; the [git-branch[1]](git-branch) command shows you the list of branch heads:

```
$ git branch
* master
```

A freshly cloned repository contains a single branch head, by default named "master", with the working directory initialized to the state of the project referred to by that branch head.

Most projects also use [tags](#def_tag). Tags, like heads, are references into the project’s history, and can be listed using the [git-tag[1]](git-tag) command:

```
$ git tag -l
v2.6.11
v2.6.11-tree
v2.6.12
v2.6.12-rc2
v2.6.12-rc3
v2.6.12-rc4
v2.6.12-rc5
v2.6.12-rc6
v2.6.13
...
```

Tags are expected to always point at the same version of a project, while heads are expected to advance as development progresses.

Create a new branch head pointing to one of these versions and check it out using [git-switch[1]](git-switch):

```
$ git switch -c new v2.6.13
```

The working directory then reflects the contents that the project had when it was tagged v2.6.13, and [git-branch[1]](git-branch) shows two branches, with an asterisk marking the currently checked-out branch:

```
$ git branch
  master
* new
```

If you decide that you’d rather see version 2.6.17, you can modify the current branch to point at v2.6.17 instead, with:

```
$ git reset --hard v2.6.17
```

Note that if the current branch head was your only reference to a particular point in history, then resetting that branch may leave you with no way to find the history it used to point to; so use this command carefully.

### Understanding History: Commits

Every change in the history of a project is represented by a commit.
The [git-show[1]](git-show) command shows the most recent commit on the current branch:

```
$ git show
commit 17cf781661e6d38f737f15f53ab552f1e95960d7
Author: Linus Torvalds <[email protected].(none)>
Date:   Tue Apr 19 14:11:06 2005 -0700

    Remove duplicate getenv(DB_ENVIRONMENT) call

    Noted by Tony Luck.

diff --git a/init-db.c b/init-db.c
index 65898fa..b002dc6 100644
--- a/init-db.c
+++ b/init-db.c
@@ -7,7 +7,7 @@
 int main(int argc, char **argv)
 {
-	char *sha1_dir = getenv(DB_ENVIRONMENT), *path;
+	char *sha1_dir, *path;
 	int len, i;

 	if (mkdir(".git", 0755) < 0) {
```

As you can see, a commit shows who made the latest change, what they did, and why.

Every commit has a 40-hexdigit id, sometimes called the "object name" or the "SHA-1 id", shown on the first line of the `git show` output. You can usually refer to a commit by a shorter name, such as a tag or a branch name, but this longer name can also be useful. Most importantly, it is a globally unique name for this commit: so if you tell somebody else the object name (for example in email), then you are guaranteed that name will refer to the same commit in their repository that it does in yours (assuming their repository has that commit at all). Since the object name is computed as a hash over the contents of the commit, you are guaranteed that the commit can never change without its name also changing.

In fact, in [Git concepts](#git-concepts) we shall see that everything stored in Git history, including file data and directory contents, is stored in an object with a name that is a hash of its contents.

#### Understanding history: commits, parents, and reachability

Every commit (except the very first commit in a project) also has a parent commit which shows what happened before this commit. Following the chain of parents will eventually take you back to the beginning of the project.
However, the commits do not form a simple list; Git allows lines of development to diverge and then reconverge, and the point where two lines of development reconverge is called a "merge". The commit representing a merge can therefore have more than one parent, with each parent representing the most recent commit on one of the lines of development leading to that point.

The best way to see how this works is using the [gitk[1]](gitk) command; running gitk now on a Git repository and looking for merge commits will help understand how Git organizes history.

In the following, we say that commit X is "reachable" from commit Y if commit X is an ancestor of commit Y. Equivalently, you could say that Y is a descendant of X, or that there is a chain of parents leading from commit Y to commit X.

#### Understanding history: History diagrams

We will sometimes represent Git history using diagrams like the one below. Commits are shown as "o", and the links between them with lines drawn with - / and \. Time goes left to right:

```
         o--o--o <-- Branch A
        /
 o--o--o <-- master
        \
         o--o--o <-- Branch B
```

If we need to talk about a particular commit, the character "o" may be replaced with another letter or number.

#### Understanding history: What is a branch?

When we need to be precise, we will use the word "branch" to mean a line of development, and "branch head" (or just "head") to mean a reference to the most recent commit on a branch. In the example above, the branch head named "A" is a pointer to one particular commit, but we refer to the line of three commits leading up to that point as all being part of "branch A".

However, when no confusion will result, we often just use the term "branch" both for branches and for branch heads.

### Manipulating branches

Creating, deleting, and modifying branches is quick and easy; here’s a summary of the commands:

`git branch`
list all branches.
`git branch <branch>` create a new branch named `<branch>`, referencing the same point in history as the current branch. `git branch <branch> <start-point>` create a new branch named `<branch>`, referencing `<start-point>`, which may be specified any way you like, including using a branch name or a tag name. `git branch -d <branch>` delete the branch `<branch>`; if the branch is not fully merged in its upstream branch or contained in the current branch, this command will fail with a warning. `git branch -D <branch>` delete the branch `<branch>` irrespective of its merged status. `git switch <branch>` make the current branch `<branch>`, updating the working directory to reflect the version referenced by `<branch>`. `git switch -c <new> <start-point>` create a new branch `<new>` referencing `<start-point>`, and check it out. The special symbol "HEAD" can always be used to refer to the current branch. In fact, Git uses a file named `HEAD` in the `.git` directory to remember which branch is current: ``` $ cat .git/HEAD ref: refs/heads/master ``` ### Examining an old version without creating a new branch The `git switch` command normally expects a branch head, but will also accept an arbitrary commit when invoked with --detach; for example, you can check out the commit referenced by a tag: ``` $ git switch --detach v2.6.17 Note: checking out 'v2.6.17'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another switch. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command again. 
Example: git switch -c new_branch_name HEAD is now at 427abfa Linux v2.6.17 ``` The HEAD then refers to the SHA-1 of the commit instead of to a branch, and git branch shows that you are no longer on a branch: ``` $ cat .git/HEAD 427abfa28afedffadfca9dd8b067eb6d36bac53f $ git branch * (detached from v2.6.17) master ``` In this case we say that the HEAD is "detached". This is an easy way to check out a particular version without having to make up a name for the new branch. You can still create a new branch (or tag) for this version later if you decide to. ### Examining branches from a remote repository The "master" branch that was created at the time you cloned is a copy of the HEAD in the repository that you cloned from. That repository may also have had other branches, though, and your local repository keeps branches which track each of those remote branches, called remote-tracking branches, which you can view using the `-r` option to [git-branch[1]](git-branch): ``` $ git branch -r origin/HEAD origin/html origin/maint origin/man origin/master origin/next origin/seen origin/todo ``` In this example, "origin" is called a remote repository, or "remote" for short. The branches of this repository are called "remote branches" from our point of view. The remote-tracking branches listed above were created based on the remote branches at clone time and will be updated by `git fetch` (hence `git pull`) and `git push`. See [Updating a repository with git fetch](#Updating-a-repository-With-git-fetch) for details. You might want to build on one of these remote-tracking branches on a branch of your own, just as you would for a tag: ``` $ git switch -c my-todo-copy origin/todo ``` You can also check out `origin/todo` directly to examine it or write a one-off patch. See [detached head](#detached-head). Note that the name "origin" is just the name that Git uses by default to refer to the repository that you cloned from. 
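Because "origin" is only a conventional name, you can register remotes under any name you like and inspect what Git recorded for them. A small sketch (the URL below is a hypothetical placeholder):

```shell
mkdir remotedemo && cd remotedemo && git init -q
# Register a remote under the name "origin" (the URL is made up):
git remote add origin https://example.invalid/project.git
git remote -v                          # each remote with its fetch/push URLs
git config --get remote.origin.fetch   # the refspec behind remote-tracking branches
```

The refspec printed by the last command is what maps the remote's `refs/heads/*` into your local `refs/remotes/origin/*`.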
### Naming branches, tags, and other references Branches, remote-tracking branches, and tags are all references to commits. All references are named with a slash-separated path name starting with `refs`; the names we’ve been using so far are actually shorthand: * The branch `test` is short for `refs/heads/test`. * The tag `v2.6.18` is short for `refs/tags/v2.6.18`. * `origin/master` is short for `refs/remotes/origin/master`. The full name is occasionally useful if, for example, there ever exists a tag and a branch with the same name. (Newly created refs are actually stored in the `.git/refs` directory, under the path given by their name. However, for efficiency reasons they may also be packed together in a single file; see [git-pack-refs[1]](git-pack-refs)). As another useful shortcut, the "HEAD" of a repository can be referred to just using the name of that repository. So, for example, "origin" is usually a shortcut for the HEAD branch in the repository "origin". For the complete list of paths which Git checks for references, and the order it uses to decide which to choose when there are multiple references with the same shorthand name, see the "SPECIFYING REVISIONS" section of [gitrevisions[7]](gitrevisions). ### Updating a repository with git fetch After you clone a repository and commit a few changes of your own, you may wish to check the original repository for updates. The `git-fetch` command, with no arguments, will update all of the remote-tracking branches to the latest version found in the original repository. It will not touch any of your own branches—​not even the "master" branch that was created for you on clone. ### Fetching branches from other repositories You can also track branches from repositories other than the one you cloned from, using [git-remote[1]](git-remote): ``` $ git remote add staging git://git.kernel.org/.../gregkh/staging.git $ git fetch staging ... 
From git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging * [new branch] master -> staging/master * [new branch] staging-linus -> staging/staging-linus * [new branch] staging-next -> staging/staging-next ``` New remote-tracking branches will be stored under the shorthand name that you gave `git remote add`, in this case `staging`: ``` $ git branch -r origin/HEAD -> origin/master origin/master staging/master staging/staging-linus staging/staging-next ``` If you run `git fetch <remote>` later, the remote-tracking branches for the named `<remote>` will be updated. If you examine the file `.git/config`, you will see that Git has added a new stanza: ``` $ cat .git/config ... [remote "staging"] url = git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git fetch = +refs/heads/*:refs/remotes/staging/* ... ``` This is what causes Git to track the remote’s branches; you may modify or delete these configuration options by editing `.git/config` with a text editor. (See the "CONFIGURATION FILE" section of [git-config[1]](git-config) for details.) Exploring git history --------------------- Git is best thought of as a tool for storing the history of a collection of files. It does this by storing compressed snapshots of the contents of a file hierarchy, together with "commits" which show the relationships between these snapshots. Git provides extremely flexible and fast tools for exploring the history of a project. We start with one specialized tool that is useful for finding the commit that introduced a bug into a project. ### How to use bisect to find a regression Suppose version 2.6.18 of your project worked, but the version at "master" crashes. Sometimes the best way to find the cause of such a regression is to perform a brute-force search through the project’s history to find the particular commit that caused the problem. 
The [git-bisect[1]](git-bisect) command can help you do this: ``` $ git bisect start $ git bisect good v2.6.18 $ git bisect bad master Bisecting: 3537 revisions left to test after this [65934a9a028b88e83e2b0f8b36618fe503349f8e] BLOCK: Make USB storage depend on SCSI rather than selecting it [try #6] ``` If you run `git branch` at this point, you’ll see that Git has temporarily moved you to "(no branch)". HEAD is now detached from any branch and points directly to a commit (with commit id 65934) that is reachable from "master" but not from v2.6.18. Compile and test it, and see whether it crashes. Assume it does crash. Then: ``` $ git bisect bad Bisecting: 1769 revisions left to test after this [7eff82c8b1511017ae605f0c99ac275a7e21b867] i2c-core: Drop useless bitmaskings ``` checks out an older version. Continue like this, telling Git at each stage whether the version it gives you is good or bad, and notice that the number of revisions left to test is cut approximately in half each time. After about 13 tests (in this case), it will output the commit id of the guilty commit. You can then examine the commit with [git-show[1]](git-show), find out who wrote it, and mail them your bug report with the commit id. Finally, run ``` $ git bisect reset ``` to return you to the branch you were on before. Note that the version which `git bisect` checks out for you at each point is just a suggestion, and you’re free to try a different version if you think it would be a good idea. For example, occasionally you may land on a commit that broke something unrelated; run ``` $ git bisect visualize ``` which will run gitk and label the commit it chose with a marker that says "bisect". Choose a safe-looking commit nearby, note its commit id, and check it out with: ``` $ git reset --hard fb47ddb2db ``` then test, run `bisect good` or `bisect bad` as appropriate, and continue.
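When a command can decide good versus bad on its own, the whole good/bad loop above can be handed to `git bisect run`, which treats exit status 0 as good and non-zero as bad. A self-contained sketch in a throwaway repository (all names here are made up), where the "bug" first appears in commit 5:

```shell
mkdir bisectdemo && cd bisectdemo && git init -q
git config user.name "Example" && git config user.email "example@example.invalid"
for i in 1 2 3 4 5 6 7 8; do
    echo "version $i" > version.txt
    if [ "$i" -ge 5 ]; then echo bug > state.txt; else echo ok > state.txt; fi
    git add . && git commit -qm "commit $i" && git tag "c$i"
done

# Let bisect drive the search; the command after "run" exits 0 for good:
git bisect start HEAD c1              # HEAD is bad, c1 is known good
git bisect run grep -q ok state.txt   # reports commit 5 as the first bad commit
git bisect reset
```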
Instead of `git bisect visualize` and then `git reset --hard fb47ddb2db`, you might just want to tell Git that you want to skip the current commit: ``` $ git bisect skip ``` In this case, though, if the skipped commits are adjacent to the first bad commit, Git may be unable to tell exactly which of those skipped commits was the first bad one. There are also ways to automate the bisecting process if you have a test script that can distinguish a good commit from a bad one. See [git-bisect[1]](git-bisect) for more information about this and other `git bisect` features. ### Naming commits We have seen several ways of naming commits already: * 40-hexdigit object name * branch name: refers to the commit at the head of the given branch * tag name: refers to the commit pointed to by the given tag (we’ve seen branches and tags are special cases of [references](#how-git-stores-references)). * HEAD: refers to the head of the current branch There are many more; see the "SPECIFYING REVISIONS" section of the [gitrevisions[7]](gitrevisions) man page for the complete list of ways to name revisions. Some examples: ``` $ git show fb47ddb2 # the first few characters of the object name # are usually enough to specify it uniquely $ git show HEAD^ # the parent of the HEAD commit $ git show HEAD^^ # the grandparent $ git show HEAD~4 # the great-great-grandparent ``` Recall that merge commits may have more than one parent; by default, `^` and `~` follow the first parent listed in the commit, but you can also choose: ``` $ git show HEAD^1 # show the first parent of HEAD $ git show HEAD^2 # show the second parent of HEAD ``` In addition to HEAD, there are several other special names for commits: Merges (to be discussed later), as well as operations such as `git reset`, which change the currently checked-out commit, generally set ORIG\_HEAD to the value HEAD had before the current operation. The `git fetch` operation always stores the head of the last fetched branch in FETCH\_HEAD.
For example, if you run `git fetch` without specifying a local branch as the target of the operation ``` $ git fetch git://example.com/proj.git theirbranch ``` the fetched commits will still be available from FETCH\_HEAD. When we discuss merges we’ll also see the special name MERGE\_HEAD, which refers to the other branch that we’re merging into the current branch. The [git-rev-parse[1]](git-rev-parse) command is a low-level command that is occasionally useful for translating some name for a commit to the object name for that commit: ``` $ git rev-parse origin e05db0fd4f31dde7005f075a84f96b360d05984b ``` ### Creating tags We can also create a tag to refer to a particular commit; after running ``` $ git tag stable-1 1b2e1d63ff ``` You can use `stable-1` to refer to the commit 1b2e1d63ff. This creates a "lightweight" tag. If you would also like to include a comment with the tag, and possibly sign it cryptographically, then you should create a tag object instead; see the [git-tag[1]](git-tag) man page for details. ### Browsing revisions The [git-log[1]](git-log) command can show lists of commits. On its own, it shows all commits reachable from the current HEAD; but you can also make more specific requests: ``` $ git log v2.5.. # commits since (not reachable from) v2.5 $ git log test..master # commits reachable from master but not test $ git log master..test # ...reachable from test but not master $ git log master...test # ...reachable from either test or master, # but not both $ git log --since="2 weeks ago" # commits from the last 2 weeks $ git log Makefile # commits which modify Makefile $ git log fs/ # ... which modify any file under fs/ $ git log -S'foo()' # commits which add or remove any file data # matching the string 'foo()' ``` And of course you can combine all of these; the following finds commits since v2.5 which touch the `Makefile` or any file under `fs`: ``` $ git log v2.5..
Makefile fs/ ``` You can also ask git log to show patches: ``` $ git log -p ``` See the `--pretty` option in the [git-log[1]](git-log) man page for more display options. Note that git log starts with the most recent commit and works backwards through the parents; however, since Git history can contain multiple independent lines of development, the particular order that commits are listed in may be somewhat arbitrary. ### Generating diffs You can generate diffs between any two versions using [git-diff[1]](git-diff): ``` $ git diff master..test ``` That will produce the diff between the tips of the two branches. If you’d prefer to find the diff from their common ancestor to test, you can use three dots instead of two: ``` $ git diff master...test ``` Sometimes what you want instead is a set of patches; for this you can use [git-format-patch[1]](git-format-patch): ``` $ git format-patch master..test ``` will generate a file with a patch for each commit reachable from test but not from master. ### Viewing old file versions You can always view an old version of a file by just checking out the correct revision first. But sometimes it is more convenient to be able to view an old version of a single file without checking anything out; this command does that: ``` $ git show v2.5:fs/locks.c ``` Before the colon may be anything that names a commit, and after it may be any path to a file tracked by Git. ### Examples #### Counting the number of commits on a branch Suppose you want to know how many commits you’ve made on `mybranch` since it diverged from `origin`: ``` $ git log --pretty=oneline origin..mybranch | wc -l ``` Alternatively, you may often see this sort of thing done with the lower-level command [git-rev-list[1]](git-rev-list), which just lists the SHA-1’s of all the given commits: ``` $ git rev-list origin..mybranch | wc -l ``` #### Check whether two branches point at the same history Suppose you want to check whether two branches point at the same point in history. 
``` $ git diff origin..master ``` will tell you whether the contents of the project are the same at the two branches; in theory, however, it’s possible that the same project contents could have been arrived at by two different historical routes. You could compare the object names: ``` $ git rev-list origin e05db0fd4f31dde7005f075a84f96b360d05984b $ git rev-list master e05db0fd4f31dde7005f075a84f96b360d05984b ``` Or you could recall that the `...` operator selects all commits reachable from either one reference or the other but not both; so ``` $ git log origin...master ``` will return no commits when the two branches are equal. #### Find first tagged version including a given fix Suppose you know that the commit e05db0fd fixed a certain problem. You’d like to find the earliest tagged release that contains that fix. Of course, there may be more than one answer—​if the history branched after commit e05db0fd, then there could be multiple "earliest" tagged releases. You could just visually inspect the commits since e05db0fd: ``` $ gitk e05db0fd.. ``` or you can use [git-name-rev[1]](git-name-rev), which will give the commit a name based on any tag it finds pointing to one of the commit’s descendants: ``` $ git name-rev --tags e05db0fd e05db0fd tags/v1.5.0-rc1^0~23 ``` The [git-describe[1]](git-describe) command does the opposite, naming the revision using a tag on which the given commit is based: ``` $ git describe e05db0fd v1.5.0-rc0-260-ge05db0f ``` but that may sometimes help you guess which tags might come after the given commit. 
If you just want to verify whether a given tagged version contains a given commit, you could use [git-merge-base[1]](git-merge-base): ``` $ git merge-base e05db0fd v1.5.0-rc1 e05db0fd4f31dde7005f075a84f96b360d05984b ``` The merge-base command finds a common ancestor of the given commits, and always returns one or the other in the case where one is a descendant of the other; so the above output shows that e05db0fd actually is an ancestor of v1.5.0-rc1. Alternatively, note that ``` $ git log v1.5.0-rc1..e05db0fd ``` will produce empty output if and only if v1.5.0-rc1 includes e05db0fd, because it outputs only commits that are not reachable from v1.5.0-rc1. As yet another alternative, the [git-show-branch[1]](git-show-branch) command lists the commits reachable from its arguments with a display on the left-hand side that indicates which arguments that commit is reachable from. So, if you run something like ``` $ git show-branch e05db0fd v1.5.0-rc0 v1.5.0-rc1 v1.5.0-rc2 ! [e05db0fd] Fix warnings in sha1_file.c - use C99 printf format if available ! [v1.5.0-rc0] GIT v1.5.0 preview ! [v1.5.0-rc1] GIT v1.5.0-rc1 ! [v1.5.0-rc2] GIT v1.5.0-rc2 ... ``` then a line like ``` + ++ [e05db0fd] Fix warnings in sha1_file.c - use C99 printf format if available ``` shows that e05db0fd is reachable from itself, from v1.5.0-rc1, and from v1.5.0-rc2, and not from v1.5.0-rc0. #### Showing commits unique to a given branch Suppose you would like to see all the commits reachable from the branch head named `master` but not from any other head in your repository. 
We can list all the heads in this repository with [git-show-ref[1]](git-show-ref): ``` $ git show-ref --heads bf62196b5e363d73353a9dcf094c59595f3153b7 refs/heads/core-tutorial db768d5504c1bb46f63ee9d6e1772bd047e05bf9 refs/heads/maint a07157ac624b2524a059a3414e99f6f44bebc1e7 refs/heads/master 24dbc180ea14dc1aebe09f14c8ecf32010690627 refs/heads/tutorial-2 1e87486ae06626c2f31eaa63d26fc0fd646c8af2 refs/heads/tutorial-fixes ``` We can get just the branch-head names, and remove `master`, with the help of the standard utilities cut and grep: ``` $ git show-ref --heads | cut -d' ' -f2 | grep -v '^refs/heads/master' refs/heads/core-tutorial refs/heads/maint refs/heads/tutorial-2 refs/heads/tutorial-fixes ``` And then we can ask to see all the commits reachable from master but not from these other heads: ``` $ gitk master --not $( git show-ref --heads | cut -d' ' -f2 | grep -v '^refs/heads/master' ) ``` Obviously, endless variations are possible; for example, to see all commits reachable from some head but not from any tag in the repository: ``` $ gitk $( git show-ref --heads ) --not $( git show-ref --tags ) ``` (See [gitrevisions[7]](gitrevisions) for explanations of commit-selecting syntax such as `--not`.) #### Creating a changelog and tarball for a software release The [git-archive[1]](git-archive) command can create a tar or zip archive from any version of a project; for example: ``` $ git archive -o latest.tar.gz --prefix=project/ HEAD ``` will use HEAD to produce a gzipped tar archive in which each filename is preceded by `project/`. The output file format is inferred from the output file extension if possible, see [git-archive[1]](git-archive) for details. 
Versions of Git older than 1.7.7 don’t know about the `tar.gz` format; for those you’ll need to use gzip explicitly: ``` $ git archive --format=tar --prefix=project/ HEAD | gzip >latest.tar.gz ``` If you’re releasing a new version of a software project, you may want to simultaneously make a changelog to include in the release announcement. Linus Torvalds, for example, makes new kernel releases by tagging them, then running: ``` $ release-script 2.6.12 2.6.13-rc6 2.6.13-rc7 ``` where release-script is a shell script that looks like: ``` #!/bin/sh stable="$1" last="$2" new="$3" echo "# git tag v$new" echo "git archive --prefix=linux-$new/ v$new | gzip -9 > ../linux-$new.tar.gz" echo "git diff v$stable v$new | gzip -9 > ../patch-$new.gz" echo "git log --no-merges v$new ^v$last > ../ChangeLog-$new" echo "git shortlog --no-merges v$new ^v$last > ../ShortLog" echo "git diff --stat --summary -M v$last v$new > ../diffstat-$new" ``` and then he just cuts and pastes the output commands after verifying that they look OK. #### Finding commits referencing a file with given content Somebody hands you a copy of a file, and asks which commits modified a file such that it contained the given content either before or after the commit. You can find out with this: ``` $ git log --raw --abbrev=40 --pretty=oneline | grep -B 1 `git hash-object filename` ``` Figuring out why this works is left as an exercise to the (advanced) student. The [git-log[1]](git-log), [git-diff-tree[1]](git-diff-tree), and [git-hash-object[1]](git-hash-object) man pages may prove helpful. Developing with git ------------------- ### Telling Git your name Before creating any commits, you should introduce yourself to Git.
The easiest way to do so is to use [git-config[1]](git-config): ``` $ git config --global user.name 'Your Name Comes Here' $ git config --global user.email '[email protected]' ``` Which will add the following to a file named `.gitconfig` in your home directory: ``` [user] name = Your Name Comes Here email = [email protected] ``` See the "CONFIGURATION FILE" section of [git-config[1]](git-config) for details on the configuration file. The file is plain text, so you can also edit it with your favorite editor. ### Creating a new repository Creating a new repository from scratch is very easy: ``` $ mkdir project $ cd project $ git init ``` If you have some initial content (say, a tarball): ``` $ tar xzvf project.tar.gz $ cd project $ git init $ git add . # include everything below ./ in the first commit: $ git commit ``` ### How to make a commit Creating a new commit takes three steps: 1. Making some changes to the working directory using your favorite editor. 2. Telling Git about your changes. 3. Creating the commit using the content you told Git about in step 2. In practice, you can interleave and repeat steps 1 and 2 as many times as you want: in order to keep track of what you want committed at step 3, Git maintains a snapshot of the tree’s contents in a special staging area called "the index." At the beginning, the content of the index will be identical to that of the HEAD. The command `git diff --cached`, which shows the difference between the HEAD and the index, should therefore produce no output at that point. 
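The three steps can be seen end to end in a throwaway repository. A self-contained sketch (all names here are arbitrary):

```shell
mkdir commitdemo && cd commitdemo && git init -q
git config user.name "Example" && git config user.email "example@example.invalid"
echo hello > file.txt && git add file.txt && git commit -qm "initial"

# 1. change the working tree
echo 'hello again' >> file.txt
# 2. tell Git about the change
git add file.txt
git diff --cached --stat        # HEAD vs index: what step 3 would commit
# 3. create the commit
git commit -qm "extend file.txt"
git diff --cached               # index matches HEAD again: prints nothing
```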
Modifying the index is easy: To update the index with the contents of a new or modified file, use ``` $ git add path/to/file ``` To remove a file from the index and from the working tree, use ``` $ git rm path/to/file ``` After each step you can verify that ``` $ git diff --cached ``` always shows the difference between the HEAD and the index file—​this is what you’d commit if you created the commit now—​and that ``` $ git diff ``` shows the difference between the working tree and the index file. Note that `git add` always adds just the current contents of a file to the index; further changes to the same file will be ignored unless you run `git add` on the file again. When you’re ready, just run ``` $ git commit ``` and Git will prompt you for a commit message and then create the new commit. Check to make sure it looks like what you expected with ``` $ git show ``` As a special shortcut, ``` $ git commit -a ``` will update the index with any files that you’ve modified or removed and create a commit, all in one step. A number of commands are useful for keeping track of what you’re about to commit: ``` $ git diff --cached # difference between HEAD and the index; what # would be committed if you ran "commit" now. $ git diff # difference between the index file and your # working directory; changes that would not # be included if you ran "commit" now. $ git diff HEAD # difference between HEAD and working tree; what # would be committed if you ran "commit -a" now. $ git status # a brief per-file summary of the above. ``` You can also use [git-gui[1]](git-gui) to create commits, view changes in the index and the working tree files, and individually select diff hunks for inclusion in the index (by right-clicking on the diff hunk and choosing "Stage Hunk For Commit"). 
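A quick way to internalize those diff variants is to watch their output change as a modification moves from the working tree into the index. A self-contained sketch (file and repository names are arbitrary):

```shell
mkdir diffdemo && cd diffdemo && git init -q
git config user.name "Example" && git config user.email "example@example.invalid"
echo one > f.txt && git add f.txt && git commit -qm "first"

echo two > f.txt           # modified, but not yet added
git diff --stat            # index vs working tree: f.txt shown
git diff --cached --stat   # HEAD vs index: nothing yet
git add f.txt
git diff --stat            # nothing: working tree matches index
git diff --cached --stat   # f.txt shown: would be committed now
git diff HEAD --stat       # HEAD vs working tree: f.txt shown
```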
### Creating good commit messages Though not required, it’s a good idea to begin the commit message with a single short (less than 50 characters) line summarizing the change, followed by a blank line and then a more thorough description. The text up to the first blank line in a commit message is treated as the commit title, and that title is used throughout Git. For example, [git-format-patch[1]](git-format-patch) turns a commit into email, and it uses the title on the Subject line and the rest of the commit in the body. ### Ignoring files A project will often generate files that you do *not* want to track with Git. This typically includes files generated by a build process or temporary backup files made by your editor. Of course, *not* tracking files with Git is just a matter of *not* calling `git add` on them. But it quickly becomes annoying to have these untracked files lying around; e.g. they make `git add .` practically useless, and they keep showing up in the output of `git status`. You can tell Git to ignore certain files by creating a file called `.gitignore` in the top level of your working directory, with contents such as: ``` # Lines starting with '#' are considered comments. # Ignore any file named foo.txt. foo.txt # Ignore (generated) html files, *.html # except foo.html which is maintained by hand. !foo.html # Ignore objects and archives. *.[oa] ``` See [gitignore[5]](gitignore) for a detailed explanation of the syntax. You can also place .gitignore files in other directories in your working tree, and they will apply to those directories and their subdirectories. The `.gitignore` files can be added to your repository like any other files (just run `git add .gitignore` and `git commit`, as usual), which is convenient when the exclude patterns (such as patterns matching build output files) would also make sense for other users who clone your repository.
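You can watch such patterns at work in a throwaway repository; `git check-ignore` reports which pattern, if any, ignores a given path. A sketch using the `*.html`/`!foo.html` patterns:

```shell
mkdir ignoredemo && cd ignoredemo && git init -q
printf '*.html\n!foo.html\n' > .gitignore
touch generated.html foo.html notes.txt
git status --porcelain               # generated.html is absent from the output
git check-ignore -v generated.html   # shows the .gitignore line that matched
```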
If you wish the exclude patterns to affect only certain repositories (instead of every repository for a given project), you may instead put them in a file in your repository named `.git/info/exclude`, or in any file specified by the `core.excludesFile` configuration variable. Some Git commands can also take exclude patterns directly on the command line. See [gitignore[5]](gitignore) for the details. ### How to merge You can rejoin two diverging branches of development using [git-merge[1]](git-merge): ``` $ git merge branchname ``` merges the development in the branch `branchname` into the current branch. A merge is made by combining the changes made in `branchname` and the changes made up to the latest commit in your current branch since their histories forked. The work tree is overwritten by the result of the merge when this combining is done cleanly, or overwritten by a half-merged result when this combining results in conflicts. Therefore, if you have uncommitted changes touching the same files as the ones impacted by the merge, Git will refuse to proceed. Most of the time, you will want to commit your changes before you can merge, and if you don’t, then [git-stash[1]](git-stash) can take these changes away while you’re doing the merge, and reapply them afterwards. If the changes are independent enough, Git will automatically complete the merge and commit the result (or reuse an existing commit in case of [fast-forward](#fast-forwards), see below). On the other hand, if there are conflicts—​for example, if the same file is modified in two different ways in the remote branch and the local branch—​then you are warned; the output may look something like this: ``` $ git merge next 100% (4/4) done Auto-merged file.txt CONFLICT (content): Merge conflict in file.txt Automatic merge failed; fix conflicts and then commit the result.
``` Conflict markers are left in the problematic files, and after you resolve the conflicts manually, you can update the index with the contents and run Git commit, as you normally would when creating a new file. If you examine the resulting commit using gitk, you will see that it has two parents, one pointing to the top of the current branch, and one to the top of the other branch. ### Resolving a merge When a merge isn’t resolved automatically, Git leaves the index and the working tree in a special state that gives you all the information you need to help resolve the merge. Files with conflicts are marked specially in the index, so until you resolve the problem and update the index, [git-commit[1]](git-commit) will fail: ``` $ git commit file.txt: needs merge ``` Also, [git-status[1]](git-status) will list those files as "unmerged", and the files with conflicts will have conflict markers added, like this: ``` <<<<<<< HEAD:file.txt Hello world ======= Goodbye >>>>>>> 77976da35a11db4580b80ae27e8d65caf5208086:file.txt ``` All you need to do is edit the files to resolve the conflicts, and then ``` $ git add file.txt $ git commit ``` Note that the commit message will already be filled in for you with some information about the merge. Normally you can just use this default message unchanged, but you may add additional commentary of your own if desired. The above is all you need to know to resolve a simple merge. But Git also provides more information to help resolve conflicts: #### Getting conflict-resolution help during a merge All of the changes that Git was able to merge automatically are already added to the index file, so [git-diff[1]](git-diff) shows only the conflicts. 
It uses an unusual syntax: ``` $ git diff diff --cc file.txt index 802992c,2b60207..0000000 --- a/file.txt +++ b/file.txt @@@ -1,1 -1,1 +1,5 @@@ ++<<<<<<< HEAD:file.txt +Hello world ++======= + Goodbye ++>>>>>>> 77976da35a11db4580b80ae27e8d65caf5208086:file.txt ``` Recall that the commit which will be committed after we resolve this conflict will have two parents instead of the usual one: one parent will be HEAD, the tip of the current branch; the other will be the tip of the other branch, which is stored temporarily in MERGE\_HEAD. During the merge, the index holds three versions of each file. Each of these three "file stages" represents a different version of the file: ``` $ git show :1:file.txt # the file in a common ancestor of both branches $ git show :2:file.txt # the version from HEAD. $ git show :3:file.txt # the version from MERGE_HEAD. ``` When you ask [git-diff[1]](git-diff) to show the conflicts, it runs a three-way diff between the conflicted merge results in the work tree with stages 2 and 3 to show only hunks whose contents come from both sides, mixed (in other words, when a hunk’s merge results come only from stage 2, that part is not conflicting and is not shown. Same for stage 3). The diff above shows the differences between the working-tree version of file.txt and the stage 2 and stage 3 versions. So instead of preceding each line by a single `+` or `-`, it now uses two columns: the first column is used for differences between the first parent and the working directory copy, and the second for differences between the second parent and the working directory copy. (See the "COMBINED DIFF FORMAT" section of [git-diff-files[1]](git-diff-files) for details of the format.)
After resolving the conflict in the obvious way (but before updating the index), the diff will look like:

```
$ git diff
diff --cc file.txt
index 802992c,2b60207..0000000
--- a/file.txt
+++ b/file.txt
@@@ -1,1 -1,1 +1,1 @@@
- Hello world
 -Goodbye
++Goodbye world
```

This shows that our resolved version deleted "Hello world" from the first parent, deleted "Goodbye" from the second parent, and added "Goodbye world", which was previously absent from both.

Some special diff options allow diffing the working directory against any of these stages:

```
$ git diff -1 file.txt		# diff against stage 1
$ git diff --base file.txt	# same as the above
$ git diff -2 file.txt		# diff against stage 2
$ git diff --ours file.txt	# same as the above
$ git diff -3 file.txt		# diff against stage 3
$ git diff --theirs file.txt	# same as the above.
```

The [git-log[1]](git-log) and [gitk[1]](gitk) commands also provide special help for merges:

```
$ git log --merge
$ gitk --merge
```

These will display all commits which exist only on HEAD or on MERGE\_HEAD, and which touch an unmerged file.

You may also use [git-mergetool[1]](git-mergetool), which lets you merge the unmerged files using external tools such as Emacs or kdiff3.

Each time you resolve the conflicts in a file and update the index:

```
$ git add file.txt
```

the different stages of that file will be "collapsed", after which `git diff` will (by default) no longer show diffs for that file.

### Undoing a merge

If you get stuck and decide to just give up and throw the whole mess away, you can always return to the pre-merge state with

```
$ git merge --abort
```

Or, if you’ve already committed the merge that you want to throw away,

```
$ git reset --hard ORIG_HEAD
```

However, this last command can be dangerous in some cases—never throw away a commit you have already committed if that commit may itself have been merged into another branch, as doing so may confuse further merges.
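As a concrete illustration of abandoning a conflicted merge, the following sketch (all repository, branch, and file names are made up for the example) creates a conflict in a throwaway repository, aborts the merge, and verifies that the working tree is back in its pre-merge state:

```shell
#!/bin/sh
# Sketch: abandon a conflicted merge with "git merge --abort".
# Everything here runs in a scratch repository; the names are illustrative.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q .
c() { git -c user.name=example -c user.email=example@example.com "$@"; }
echo "Hello world" > file.txt
git add file.txt && c commit -q -m "initial"
git switch -q -c other
echo "Goodbye" > file.txt && c commit -q -am "their change"
git switch -q -                         # back to the original branch
echo "Hello there" > file.txt && c commit -q -am "our change"
git merge other || true                 # stops with a conflict in file.txt
git merge --abort                       # throw the merge away
grep "Hello there" file.txt             # pre-merge contents are back
```

After the abort, `git status` reports a clean working tree, exactly as before the merge was attempted.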
### Fast-forward merges

There is one special case not mentioned above, which is treated differently. Normally, a merge results in a merge commit, with two parents, one pointing at each of the two lines of development that were merged.

However, if the current branch is an ancestor of the other—so every commit present in the current branch is already contained in the other branch—then Git just performs a "fast-forward"; the head of the current branch is moved forward to point at the head of the merged-in branch, without any new commits being created.

### Fixing mistakes

If you’ve messed up the working tree, but haven’t yet committed your mistake, you can return the entire working tree to the last committed state with

```
$ git restore --staged --worktree :/
```

If you make a commit that you later wish you hadn’t, there are two fundamentally different ways to fix the problem:

1. You can create a new commit that undoes whatever was done by the old commit. This is the correct thing if your mistake has already been made public.
2. You can go back and modify the old commit. You should never do this if you have already made the history public; Git does not normally expect the "history" of a project to change, and cannot correctly perform repeated merges from a branch that has had its history changed.

#### Fixing a mistake with a new commit

Creating a new commit that reverts an earlier change is very easy; just pass the [git-revert[1]](git-revert) command a reference to the bad commit; for example, to revert the most recent commit:

```
$ git revert HEAD
```

This will create a new commit which undoes the change in HEAD. You will be given a chance to edit the commit message for the new commit.

You can also revert an earlier change, for example, the next-to-last:

```
$ git revert HEAD^
```

In this case Git will attempt to undo the old change while leaving intact any changes made since then.
If more recent changes overlap with the changes to be reverted, then you will be asked to fix conflicts manually, just as in the case of [resolving a merge](#resolving-a-merge).

#### Fixing a mistake by rewriting history

If the problematic commit is the most recent commit, and you have not yet made that commit public, then you may just [destroy it using `git reset`](#undoing-a-merge).

Alternatively, you can edit the working directory and update the index to fix your mistake, just as if you were going to [create a new commit](#how-to-make-a-commit), then run

```
$ git commit --amend
```

which will replace the old commit by a new commit incorporating your changes, giving you a chance to edit the old commit message first.

Again, you should never do this to a commit that may already have been merged into another branch; use [git-revert[1]](git-revert) instead in that case.

It is also possible to replace commits further back in the history, but this is an advanced topic to be left for [another chapter](#cleaning-up-history).

#### Checking out an old version of a file

In the process of undoing a previous bad change, you may find it useful to check out an older version of a particular file using [git-restore[1]](git-restore). The command

```
$ git restore --source=HEAD^ path/to/file
```

replaces path/to/file by the contents it had in the commit HEAD^, and also updates the index to match. It does not change branches.

If you just want to look at an old version of the file, without modifying the working directory, you can do that with [git-show[1]](git-show):

```
$ git show HEAD^:path/to/file
```

which will display the given version of the file.

#### Temporarily setting aside work in progress

While you are in the middle of working on something complicated, you find an unrelated but obvious and trivial bug. You would like to fix it before continuing.
You can use [git-stash[1]](git-stash) to save the current state of your work, and after fixing the bug (or, optionally after doing so on a different branch and then coming back), unstash the work-in-progress changes.

```
$ git stash push -m "work in progress for foo feature"
```

This command will save your changes away to the `stash`, and reset your working tree and the index to match the tip of your current branch. Then you can make your fix as usual.

```
... edit and test ...
$ git commit -a -m "blorpl: typofix"
```

After that, you can go back to what you were working on with `git stash pop`:

```
$ git stash pop
```

### Ensuring good performance

On large repositories, Git depends on compression to keep the history information from taking up too much space on disk or in memory. Some Git commands may automatically run [git-gc[1]](git-gc), so you don’t have to worry about running it manually. However, compressing a large repository may take a while, so you may want to call `gc` explicitly to avoid automatic compression kicking in when it is not convenient.

### Ensuring reliability

#### Checking the repository for corruption

The [git-fsck[1]](git-fsck) command runs a number of self-consistency checks on the repository, and reports on any problems. This may take some time.

```
$ git fsck
dangling commit 7281251ddd2a61e38657c827739c57015671a6b3
dangling commit 2706a059f258c6b245f298dc4ff2ccd30ec21a63
dangling commit 13472b7c4b80851a1bc551779171dcb03655e9b5
dangling blob 218761f9d90712d37a9c5e36f406f92202db07eb
dangling commit bf093535a34a4d35731aa2bd90fe6b176302f14f
dangling commit 8e4bec7f2ddaa268bef999853c25755452100f8e
dangling tree d50bb86186bf27b681d25af89d3b5b68382e4085
dangling tree b24c2473f1fd3d91352a624795be026d64c8841f
...
```

You will see informational messages on dangling objects. They are objects that still exist in the repository but are no longer referenced by any of your branches, and can (and will) be removed after a while with `gc`.
You can run `git fsck --no-dangling` to suppress these messages, and still view real errors.

#### Recovering lost changes

##### Reflogs

Say you modify a branch with [`git reset --hard`](#fixing-mistakes), and then realize that the branch was the only reference you had to that point in history.

Fortunately, Git also keeps a log, called a "reflog", of all the previous values of each branch. So in this case you can still find the old history using, for example,

```
$ git log master@{1}
```

This lists the commits reachable from the previous version of the `master` branch head. This syntax can be used with any Git command that accepts a commit, not just with `git log`. Some other examples:

```
$ git show master@{2}		# See where the branch pointed 2,
$ git show master@{3}		# 3, ... changes ago.
$ gitk master@{yesterday}	# See where it pointed yesterday,
$ gitk master@{"1 week ago"}	# ... or last week
$ git log --walk-reflogs master	# show reflog entries for master
```

A separate reflog is kept for the HEAD, so

```
$ git show HEAD@{"1 week ago"}
```

will show what HEAD pointed to one week ago, not what the current branch pointed to one week ago. This allows you to see the history of what you’ve checked out.

The reflogs are kept by default for 30 days, after which they may be pruned. See [git-reflog[1]](git-reflog) and [git-gc[1]](git-gc) to learn how to control this pruning, and see the "SPECIFYING REVISIONS" section of [gitrevisions[7]](gitrevisions) for details.

Note that the reflog history is very different from normal Git history. While normal history is shared by every repository that works on the same project, the reflog history is not shared: it tells you only about how the branches in your local repository have changed over time.

##### Examining dangling objects

In some situations the reflog may not be able to save you. For example, suppose you delete a branch, then realize you need the history it contained.
The reflog is also deleted; however, if you have not yet pruned the repository, then you may still be able to find the lost commits in the dangling objects that `git fsck` reports. See [Dangling objects](#dangling-objects) for the details.

```
$ git fsck
dangling commit 7281251ddd2a61e38657c827739c57015671a6b3
dangling commit 2706a059f258c6b245f298dc4ff2ccd30ec21a63
dangling commit 13472b7c4b80851a1bc551779171dcb03655e9b5
...
```

You can examine one of those dangling commits with, for example,

```
$ gitk 7281251ddd --not --all
```

which does what it sounds like: it says that you want to see the commit history that is described by the dangling commit(s), but not the history that is described by all your existing branches and tags. Thus you get exactly the history reachable from that commit that is lost. (And notice that it might not be just one commit: we only report the "tip of the line" as being dangling, but there might be a whole deep and complex commit history that was dropped.)

If you decide you want the history back, you can always create a new reference pointing to it, for example, a new branch:

```
$ git branch recovered-branch 7281251ddd
```

Other types of dangling objects (blobs and trees) are also possible, and dangling objects can arise in other situations.

Sharing development with others
-------------------------------

### Getting updates with git pull

After you clone a repository and commit a few changes of your own, you may wish to check the original repository for updates and merge them into your own work.

We have already seen [how to keep remote-tracking branches up to date](#Updating-a-repository-With-git-fetch) with [git-fetch[1]](git-fetch), and how to merge two branches.
So you can merge in changes from the original repository’s master branch with:

```
$ git fetch
$ git merge origin/master
```

However, the [git-pull[1]](git-pull) command provides a way to do this in one step:

```
$ git pull origin master
```

In fact, if you have `master` checked out, then this branch has been configured by `git clone` to get changes from the HEAD branch of the origin repository. So often you can accomplish the above with just a simple

```
$ git pull
```

This command will fetch changes from the remote branches to your remote-tracking branches `origin/*`, and merge the default branch into the current branch.

More generally, a branch that is created from a remote-tracking branch will pull by default from that branch. See the descriptions of the `branch.<name>.remote` and `branch.<name>.merge` options in [git-config[1]](git-config), and the discussion of the `--track` option in [git-checkout[1]](git-checkout), to learn how to control these defaults.

In addition to saving you keystrokes, `git pull` also helps you by producing a default commit message documenting the branch and repository that you pulled from. (But note that no such commit will be created in the case of a [fast-forward](#fast-forwards); instead, your branch will just be updated to point to the latest commit from the upstream branch.)

The `git pull` command can also be given `.` as the "remote" repository, in which case it just merges in a branch from the current repository; so the commands

```
$ git pull . branch
$ git merge branch
```

are roughly equivalent.

### Submitting patches to a project

If you just have a few changes, the simplest way to submit them may just be to send them as patches in email:

First, use [git-format-patch[1]](git-format-patch); for example:

```
$ git format-patch origin
```

will produce a numbered series of files in the current directory, one for each patch in the current branch but not in `origin/HEAD`.

`git format-patch` can include an initial "cover letter".
You can insert commentary on individual patches after the three dash line which `format-patch` places after the commit message but before the patch itself. If you use `git notes` to track your cover letter material, `git format-patch --notes` will include the commit’s notes in a similar manner.

You can then import these into your mail client and send them by hand. However, if you have a lot to send at once, you may prefer to use the [git-send-email[1]](git-send-email) script to automate the process. Consult the mailing list for your project first to determine their requirements for submitting patches.

### Importing patches to a project

Git also provides a tool called [git-am[1]](git-am) (am stands for "apply mailbox"), for importing such an emailed series of patches. Just save all of the patch-containing messages, in order, into a single mailbox file, say `patches.mbox`, then run

```
$ git am -3 patches.mbox
```

Git will apply each patch in order; if any conflicts are found, it will stop, and you can fix the conflicts as described in "[Resolving a merge](#resolving-a-merge)". (The `-3` option tells Git to perform a merge; if you would prefer it just to abort and leave your tree and index untouched, you may omit that option.)

Once the index is updated with the results of the conflict resolution, instead of creating a new commit, just run

```
$ git am --continue
```

and Git will create the commit for you and continue applying the remaining patches from the mailbox.

The final result will be a series of commits, one for each patch in the original mailbox, with authorship and commit log message each taken from the message containing each patch.

### Public Git repositories

Another way to submit changes to a project is to tell the maintainer of that project to pull the changes from your repository using [git-pull[1]](git-pull).
In the section "[Getting updates with `git pull`](#getting-updates-With-git-pull)" we described this as a way to get updates from the "main" repository, but it works just as well in the other direction.

If you and the maintainer both have accounts on the same machine, then you can just pull changes from each other’s repositories directly; commands that accept repository URLs as arguments will also accept a local directory name:

```
$ git clone /path/to/repository
$ git pull /path/to/other/repository
```

or an ssh URL:

```
$ git clone ssh://yourhost/~you/repository
```

For projects with few developers, or for synchronizing a few private repositories, this may be all you need.

However, the more common way to do this is to maintain a separate public repository (usually on a different host) for others to pull changes from. This is usually more convenient, and allows you to cleanly separate private work in progress from publicly visible work.

You will continue to do your day-to-day work in your personal repository, but periodically "push" changes from your personal repository into your public repository, allowing other developers to pull from that repository. So the flow of changes, in a situation where there is one other developer with a public repository, looks like this:

```
                      you push
your personal repo ------------------> your public repo
        ^                                     |
        |                                     |
        | you pull                            | they pull
        |                                     |
        |                                     |
        |               they push             V
their public repo <------------------- their repo
```

We explain how to do this in the following sections.

#### Setting up a public repository

Assume your personal repository is in the directory `~/proj`. We first create a new clone of the repository and tell `git daemon` that it is meant to be public:

```
$ git clone --bare ~/proj proj.git
$ touch proj.git/git-daemon-export-ok
```

The resulting directory proj.git contains a "bare" git repository—it is just the contents of the `.git` directory, without any files checked out around it.
Next, copy `proj.git` to the server where you plan to host the public repository. You can use scp, rsync, or whatever is most convenient.

#### Exporting a Git repository via the Git protocol

This is the preferred method.

If someone else administers the server, they should tell you what directory to put the repository in, and what `git://` URL it will appear at. You can then skip to the section "[Pushing changes to a public repository](#pushing-changes-to-a-public-repository)", below.

Otherwise, all you need to do is start [git-daemon[1]](git-daemon); it will listen on port 9418. By default, it will allow access to any directory that looks like a Git directory and contains the magic file git-daemon-export-ok. Passing some directory paths as `git daemon` arguments will further restrict the exports to those paths.

You can also run `git daemon` as an inetd service; see the [git-daemon[1]](git-daemon) man page for details. (See especially the examples section.)

#### Exporting a git repository via HTTP

The Git protocol gives better performance and reliability, but on a host with a web server set up, HTTP exports may be simpler to set up.

All you need to do is place the newly created bare Git repository in a directory that is exported by the web server, and make some adjustments to give web clients some extra information they need:

```
$ mv proj.git /home/you/public_html/proj.git
$ cd proj.git
$ git --bare update-server-info
$ mv hooks/post-update.sample hooks/post-update
```

(For an explanation of the last two lines, see [git-update-server-info[1]](git-update-server-info) and [githooks[5]](githooks).)

Advertise the URL of `proj.git`.
Anybody else should then be able to clone or pull from that URL, for example with a command line like:

```
$ git clone http://yourserver.com/~you/proj.git
```

(See also [setup-git-server-over-http](https://git-scm.com/docs/howto/setup-git-server-over-http) for a slightly more sophisticated setup using WebDAV which also allows pushing over HTTP.)

#### Pushing changes to a public repository

Note that the two techniques outlined above (exporting via [http](#exporting-via-http) or [git](#exporting-via-git)) allow other maintainers to fetch your latest changes, but they do not allow write access, which you will need to update the public repository with the latest changes created in your private repository.

The simplest way to do this is using [git-push[1]](git-push) and ssh; to update the remote branch named `master` with the latest state of your branch named `master`, run

```
$ git push ssh://yourserver.com/~you/proj.git master:master
```

or just

```
$ git push ssh://yourserver.com/~you/proj.git master
```

As with `git fetch`, `git push` will complain if this does not result in a [fast-forward](#fast-forwards); see the following section for details on handling this case.

Note that the target of a `push` is normally a [bare](#def_bare_repository) repository. You can also push to a repository that has a checked-out working tree, but a push to update the currently checked-out branch is denied by default to prevent confusion. See the description of the receive.denyCurrentBranch option in [git-config[1]](git-config) for details.
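One documented value of receive.denyCurrentBranch, `updateInstead`, lets such a push through and updates the server's working tree as well. The following sketch demonstrates it end to end with two scratch repositories (all paths and names here are illustrative, not part of any real setup):

```shell
#!/bin/sh
# Sketch: pushing into a repository with a checked-out working tree.
# "updateInstead" makes the push update the working tree too; by
# default such a push is refused. All paths are illustrative.
set -e
top=$(mktemp -d)
git init -q "$top/server"
git -C "$top/server" -c user.name=example -c user.email=example@example.com \
	commit -q --allow-empty -m "initial"
git -C "$top/server" config receive.denyCurrentBranch updateInstead
git clone -q "$top/server" "$top/client"
echo hello > "$top/client/greeting.txt"
git -C "$top/client" add greeting.txt
git -C "$top/client" -c user.name=example -c user.email=example@example.com \
	commit -q -m "add greeting"
git -C "$top/client" push -q origin HEAD
cat "$top/server/greeting.txt"        # the push updated the server's tree
```

Note that `updateInstead` refuses the push if the server's working tree or index has uncommitted changes, so it is only suitable when nobody edits files directly on the server.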
As with `git fetch`, you may also set up configuration options to save typing; so, for example:

```
$ git remote add public-repo ssh://yourserver.com/~you/proj.git
```

adds the following to `.git/config`:

```
[remote "public-repo"]
	url = ssh://yourserver.com/~you/proj.git
	fetch = +refs/heads/*:refs/remotes/public-repo/*
```

which lets you do the same push with just

```
$ git push public-repo master
```

See the explanations of the `remote.<name>.url`, `branch.<name>.remote`, and `remote.<name>.push` options in [git-config[1]](git-config) for details.

#### What to do when a push fails

If a push would not result in a [fast-forward](#fast-forwards) of the remote branch, then it will fail with an error like:

```
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to '...'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
```

This can happen, for example, if you:

* use `git reset --hard` to remove already-published commits, or
* use `git commit --amend` to replace already-published commits (as in [Fixing a mistake by rewriting history](#fixing-a-mistake-by-rewriting-history)), or
* use `git rebase` to rebase any already-published commits (as in [Keeping a patch series up to date using git rebase](#using-git-rebase)).

You may force `git push` to perform the update anyway by preceding the branch name with a plus sign:

```
$ git push ssh://yourserver.com/~you/proj.git +master
```

Note the addition of the `+` sign. Alternatively, you can use the `-f` flag to force the remote update, as in:

```
$ git push -f ssh://yourserver.com/~you/proj.git master
```

Normally whenever a branch head in a public repository is modified, it is modified to point to a descendant of the commit that it pointed to before.
By forcing a push in this situation, you break that convention. (See [Problems with rewriting history](#problems-With-rewriting-history).) Nevertheless, this is a common practice for people who need a simple way to publish a work-in-progress patch series, and it is an acceptable compromise as long as you warn other developers that this is how you intend to manage the branch.

It’s also possible for a push to fail in this way when other people have the right to push to the same repository. In that case, the correct solution is to retry the push after first updating your work: either by a pull, or by a fetch followed by a rebase; see the [next section](#setting-up-a-shared-repository) and [gitcvs-migration[7]](gitcvs-migration) for more.

#### Setting up a shared repository

Another way to collaborate is by using a model similar to that commonly used in CVS, where several developers with special rights all push to and pull from a single shared repository. See [gitcvs-migration[7]](gitcvs-migration) for instructions on how to set this up.

However, while there is nothing wrong with Git’s support for shared repositories, this mode of operation is not generally recommended, simply because the mode of collaboration that Git supports—by exchanging patches and pulling from public repositories—has so many advantages over the central shared repository:

* Git’s ability to quickly import and merge patches allows a single maintainer to process incoming changes even at very high rates. And when that becomes too much, `git pull` provides an easy way for that maintainer to delegate this job to other maintainers while still allowing optional review of incoming changes.
* Since every developer’s repository has the same complete copy of the project history, no repository is special, and it is trivial for another developer to take over maintenance of a project, either by mutual agreement, or because a maintainer becomes unresponsive or difficult to work with.
* The lack of a central group of "committers" means there is less need for formal decisions about who is "in" and who is "out".

#### Allowing web browsing of a repository

The gitweb cgi script provides users an easy way to browse your project’s revisions, file contents and logs without having to install Git. Features like RSS/Atom feeds and blame/annotation details may optionally be enabled.

The [git-instaweb[1]](git-instaweb) command provides a simple way to start browsing the repository using gitweb. The default server when using instaweb is lighttpd.

See the file gitweb/INSTALL in the Git source tree and [gitweb[1]](gitweb) for detailed instructions on setting up a permanent installation with a CGI or Perl capable server.

### How to get a Git repository with minimal history

A [shallow clone](#def_shallow_clone), with its truncated history, is useful when one is interested only in recent history of a project and getting full history from the upstream is expensive.

A [shallow clone](#def_shallow_clone) is created by specifying the [git-clone[1]](git-clone) `--depth` switch. The depth can later be changed with the [git-fetch[1]](git-fetch) `--depth` switch, or full history restored with `--unshallow`.

Merging inside a [shallow clone](#def_shallow_clone) will work as long as a merge base is in the recent history. Otherwise, it will be like merging unrelated histories and may result in huge conflicts. This limitation may make such a repository unsuitable to be used in merge based workflows.

### Examples

#### Maintaining topic branches for a Linux subsystem maintainer

This describes how Tony Luck uses Git in his role as maintainer of the IA64 architecture for the Linux kernel.

He uses two public branches:

* A "test" tree into which patches are initially placed so that they can get some exposure when integrated with other ongoing development. This tree is available to Andrew for pulling into -mm whenever he wants.
* A "release" tree into which tested patches are moved for final sanity checking, and as a vehicle to send them upstream to Linus (by sending him a "please pull" request.) He also uses a set of temporary branches ("topic branches"), each containing a logical grouping of patches. To set this up, first create your work tree by cloning Linus’s public tree: ``` $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git work $ cd work ``` Linus’s tree will be stored in the remote-tracking branch named origin/master, and can be updated using [git-fetch[1]](git-fetch); you can track other public trees using [git-remote[1]](git-remote) to set up a "remote" and [git-fetch[1]](git-fetch) to keep them up to date; see [Repositories and Branches](#repositories-and-branches). Now create the branches in which you are going to work; these start out at the current tip of origin/master branch, and should be set up (using the `--track` option to [git-branch[1]](git-branch)) to merge changes in from Linus by default. ``` $ git branch --track test origin/master $ git branch --track release origin/master ``` These can be easily kept up to date using [git-pull[1]](git-pull). ``` $ git switch test && git pull $ git switch release && git pull ``` Important note! If you have any local changes in these branches, then this merge will create a commit object in the history (with no local changes Git will simply do a "fast-forward" merge). Many people dislike the "noise" that this creates in the Linux history, so you should avoid doing this capriciously in the `release` branch, as these noisy commits will become part of the permanent history when you ask Linus to pull from the release branch. A few configuration variables (see [git-config[1]](git-config)) can make it easy to push both branches to your public tree. (See [Setting up a public repository](#setting-up-a-public-repository).) 
```
$ cat >> .git/config <<EOF
[remote "mytree"]
	url = master.kernel.org:/pub/scm/linux/kernel/git/aegl/linux.git
	push = release
	push = test
EOF
```

Then you can push both the test and release trees using [git-push[1]](git-push):

```
$ git push mytree
```

or push just one of the test and release branches using:

```
$ git push mytree test
```

or

```
$ git push mytree release
```

Now to apply some patches from the community. Think of a short snappy name for a branch to hold this patch (or related group of patches), and create a new branch from a recent stable tag of Linus’s branch. Picking a stable base for your branch will:

1. help you: by avoiding inclusion of unrelated and perhaps lightly tested changes
2. help future bug hunters that use `git bisect` to find problems

```
$ git switch -c speed-up-spinlocks v2.6.35
```

Now you apply the patch(es), run some tests, and commit the change(s). If the patch is a multi-part series, then you should apply each as a separate commit to this branch.

```
$ ... patch ... test ... commit [ ... patch ... test ... commit ]*
```

When you are happy with the state of this change, you can merge it into the "test" branch in preparation to make it public:

```
$ git switch test && git merge speed-up-spinlocks
```

It is unlikely that you would have any conflicts here … but you might if you spent a while on this step and had also pulled new versions from upstream.

Sometime later when enough time has passed and testing done, you can pull the same branch into the `release` tree ready to go upstream. This is where you see the value of keeping each patch (or patch series) in its own branch. It means that the patches can be moved into the `release` tree in any order.

```
$ git switch release && git merge speed-up-spinlocks
```

After a while, you will have a number of branches, and despite the well chosen names you picked for each of them, you may forget what they are for, or what status they are in.
To get a reminder of what changes are in a specific branch, use:

```
$ git log linux..branchname | git shortlog
```

To see whether it has already been merged into the test or release branches, use:

```
$ git log test..branchname
```

or

```
$ git log release..branchname
```

(If this branch has not yet been merged, you will see some log entries. If it has been merged, then there will be no output.)

Once a patch completes the great cycle (moving from test to release, then pulled by Linus, and finally coming back into your local `origin/master` branch), the branch for this change is no longer needed. You detect this when the output from:

```
$ git log origin..branchname
```

is empty. At this point the branch can be deleted:

```
$ git branch -d branchname
```

Some changes are so trivial that it is not necessary to create a separate branch and then merge into each of the test and release branches. For these changes, just apply directly to the `release` branch, and then merge that into the `test` branch.

After pushing your work to `mytree`, you can use [git-request-pull[1]](git-request-pull) to prepare a "please pull" request message to send to Linus:

```
$ git push mytree
$ git request-pull origin mytree release
```

Here are some of the scripts that simplify all this even further.

```
==== update script ====
# Update a branch in my Git tree.  If the branch to be updated
# is origin, then pull from kernel.org.  Otherwise merge
# origin/master branch into test|release branch

case "$1" in
test|release)
	git checkout $1 && git pull . origin
	;;
origin)
	before=$(git rev-parse refs/remotes/origin/master)
	git fetch origin
	after=$(git rev-parse refs/remotes/origin/master)
	if [ $before != $after ]
	then
		git log $before..$after | git shortlog
	fi
	;;
*)
	echo "usage: $0 origin|test|release" 1>&2
	exit 1
	;;
esac
```

```
==== merge script ====
# Merge a branch into either the test or release branch

pname=$0

usage()
{
	echo "usage: $pname branch test|release" 1>&2
	exit 1
}

git show-ref -q --verify -- refs/heads/"$1" || {
	echo "Can't see branch <$1>" 1>&2
	usage
}

case "$2" in
test|release)
	if [ $(git log $2..$1 | wc -c) -eq 0 ]
	then
		echo $1 already merged into $2 1>&2
		exit 1
	fi
	git checkout $2 && git pull . $1
	;;
*)
	usage
	;;
esac
```

```
==== status script ====
# report on status of my ia64 Git tree

gb=$(tput setab 2)
rb=$(tput setab 1)
restore=$(tput setab 9)

if [ `git rev-list test..release | wc -c` -gt 0 ]
then
	echo $rb Warning: commits in release that are not in test $restore
	git log test..release
fi

for branch in `git show-ref --heads | sed 's|^.*/||'`
do
	if [ $branch = test -o $branch = release ]
	then
		continue
	fi

	echo -n $gb ======= $branch ====== $restore " "
	status=
	for ref in test release origin/master
	do
		if [ `git rev-list $ref..$branch | wc -c` -gt 0 ]
		then
			status=$status${ref:0:1}
		fi
	done
	case $status in
	trl)
		echo $rb Need to pull into test $restore
		;;
	rl)
		echo "In test"
		;;
	l)
		echo "Waiting for linus"
		;;
	"")
		echo $rb All done $restore
		;;
	*)
		echo $rb "<$status>" $restore
		;;
	esac
	git log origin/master..$branch | git shortlog
done
```

Rewriting history and maintaining patch series
----------------------------------------------

Normally commits are only added to a project, never taken away or replaced. Git is designed with this assumption, and violating it will cause Git’s merge machinery (for example) to do the wrong thing.

However, there is a situation in which it can be useful to violate this assumption.
### Creating the perfect patch series Suppose you are a contributor to a large project, and you want to add a complicated feature, and to present it to the other developers in a way that makes it easy for them to read your changes, verify that they are correct, and understand why you made each change. If you present all of your changes as a single patch (or commit), they may find that it is too much to digest all at once. If you present them with the entire history of your work, complete with mistakes, corrections, and dead ends, they may be overwhelmed. So the ideal is usually to produce a series of patches such that: 1. Each patch can be applied in order. 2. Each patch includes a single logical change, together with a message explaining the change. 3. No patch introduces a regression: after applying any initial part of the series, the resulting project still compiles and works, and has no bugs that it didn’t have before. 4. The complete series produces the same end result as your own (probably much messier!) development process did. We will introduce some tools that can help you do this, explain how to use them, and then explain some of the problems that can arise because you are rewriting history. ### Keeping a patch series up to date using git rebase Suppose that you create a branch `mywork` on a remote-tracking branch `origin`, and create some commits on top of it: ``` $ git switch -c mywork origin $ vi file.txt $ git commit $ vi otherfile.txt $ git commit ... 
``` You have performed no merges into mywork, so it is just a simple linear sequence of patches on top of `origin`: ``` o--o--O <-- origin \ a--b--c <-- mywork ``` Some more interesting work has been done in the upstream project, and `origin` has advanced: ``` o--o--O--o--o--o <-- origin \ a--b--c <-- mywork ``` At this point, you could use `pull` to merge your changes back in; the result would create a new merge commit, like this: ``` o--o--O--o--o--o <-- origin \ \ a--b--c--m <-- mywork ``` However, if you prefer to keep the history in mywork a simple series of commits without any merges, you may instead choose to use [git-rebase[1]](git-rebase): ``` $ git switch mywork $ git rebase origin ``` This will remove each of your commits from mywork, temporarily saving them as patches (in a directory named `.git/rebase-apply`), update mywork to point at the latest version of origin, then apply each of the saved patches to the new mywork. The result will look like: ``` o--o--O--o--o--o <-- origin \ a'--b'--c' <-- mywork ``` In the process, it may discover conflicts. In that case it will stop and allow you to fix the conflicts; after fixing conflicts, use `git add` to update the index with those contents, and then, instead of running `git commit`, just run ``` $ git rebase --continue ``` and Git will continue applying the rest of the patches. At any point you may use the `--abort` option to abort this process and return mywork to the state it had before you started the rebase: ``` $ git rebase --abort ``` If you need to reorder or edit a number of commits in a branch, it may be easier to use `git rebase -i`, which allows you to reorder and squash commits, as well as marking them for individual editing during the rebase. See [Using interactive rebases](#interactive-rebase) for details, and [Reordering or selecting from a patch series](#reordering-patch-series) for alternatives. 
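The rebase-and-resolve cycle described above can be sketched end to end in a throwaway repository. This is a minimal illustration under invented names (`demo`, `upstream`, `mywork`, `file.txt`); a local branch stands in for `origin`, and the `git config` lines are only there so commits work on a machine with no identity configured:

```shell
# Build a tiny history where 'mywork' and the upstream branch both
# edit the same file, then rebase and resolve the resulting conflict.
mkdir demo && cd demo && git init -q
git config user.name Example && git config user.email example@example.com
echo base > file.txt
git add file.txt && git commit -qm 'base'
git checkout -qb upstream               # stands in for origin
echo theirs > file.txt && git commit -qam 'upstream change'
git checkout -qb mywork HEAD^           # branch from the base commit
echo mine > file.txt && git commit -qam 'my change'
git rebase upstream || echo 'stopped on a conflict, as expected'
echo resolved > file.txt                # fix the conflict by hand
git add file.txt                        # mark it resolved...
git rebase --continue                   # ...and let the rebase finish
```

Afterwards `git log --oneline` shows a linear history with no merge commit: the base commit, the upstream change, and the replayed `my change`.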
### Rewriting a single commit We saw in [Fixing a mistake by rewriting history](#fixing-a-mistake-by-rewriting-history) that you can replace the most recent commit using ``` $ git commit --amend ``` which will replace the old commit by a new commit incorporating your changes, giving you a chance to edit the old commit message first. This is useful for fixing typos in your last commit, or for adjusting the patch contents of a poorly staged commit. If you need to amend commits from deeper in your history, you can use [interactive rebase’s `edit` instruction](#interactive-rebase). ### Reordering or selecting from a patch series Sometimes you want to edit a commit deeper in your history. One approach is to use `git format-patch` to create a series of patches and then reset the state to before the patches: ``` $ git format-patch origin $ git reset --hard origin ``` Then modify, reorder, or eliminate patches as needed before applying them again with [git-am[1]](git-am): ``` $ git am *.patch ``` ### Using interactive rebases You can also edit a patch series with an interactive rebase. This is the same as [reordering a patch series using `format-patch`](#reordering-patch-series), so use whichever interface you like best. Rebase your current HEAD on the last commit you want to retain as-is. For example, if you want to reorder the last 5 commits, use: ``` $ git rebase -i HEAD~5 ``` This will open your editor with a list of steps to be taken to perform your rebase. ``` pick deadbee The oneline of this commit pick fa1afe1 The oneline of the next commit ... 
# Rebase c0ffeee..deadbee onto c0ffeee # # Commands: # p, pick = use commit # r, reword = use commit, but edit the commit message # e, edit = use commit, but stop for amending # s, squash = use commit, but meld into previous commit # f, fixup = like "squash", but discard this commit's log message # x, exec = run command (the rest of the line) using shell # # These lines can be re-ordered; they are executed from top to bottom. # # If you remove a line here THAT COMMIT WILL BE LOST. # # However, if you remove everything, the rebase will be aborted. # # Note that empty commits are commented out ``` As explained in the comments, you can reorder commits, squash them together, edit commit messages, etc. by editing the list. Once you are satisfied, save the list and close your editor, and the rebase will begin. The rebase will stop where `pick` has been replaced with `edit` or when a step in the list fails to mechanically resolve conflicts and needs your help. When you are done editing and/or resolving conflicts you can continue with `git rebase --continue`. If you decide that things are getting too hairy, you can always bail out with `git rebase --abort`. Even after the rebase is complete, you can still recover the original branch by using the [reflog](#reflogs). For a more detailed discussion of the procedure and additional tips, see the "INTERACTIVE MODE" section of [git-rebase[1]](git-rebase). ### Other tools There are numerous other tools, such as StGit, which exist for the purpose of maintaining a patch series. These are outside of the scope of this manual. ### Problems with rewriting history The primary problem with rewriting the history of a branch has to do with merging. 
Suppose somebody fetches your branch and merges it into their branch, with a result something like this: ``` o--o--O--o--o--o <-- origin \ \ t--t--t--m <-- their branch ``` Then suppose you modify the last three commits: ``` o--o--o <-- new head of origin / o--o--O--o--o--o <-- old head of origin ``` If we examined all this history together in one repository, it would look like: ``` o--o--o <-- new head of origin / o--o--O--o--o--o <-- old head of origin \ \ t--t--t--m <-- their branch ``` Git has no way of knowing that the new head is an updated version of the old head; it treats this situation exactly the same as it would if two developers had independently done the work on the old and new heads in parallel. At this point, if someone attempts to merge the new head into their branch, Git will attempt to merge together the two (old and new) lines of development, instead of trying to replace the old by the new. The results are likely to be unexpected. You may still choose to publish branches whose history is rewritten, and it may be useful for others to be able to fetch those branches in order to examine or test them, but they should not attempt to pull such branches into their own work. For true distributed development that supports proper merging, published branches should never be rewritten. ### Why bisecting merge commits can be harder than bisecting linear history The [git-bisect[1]](git-bisect) command correctly handles history that includes merge commits. However, when the commit that it finds is a merge commit, the user may need to work harder than usual to figure out why that commit introduced a problem. Imagine this history: ``` ---Z---o---X---...---o---A---C---D \ / o---o---Y---...---o---B ``` Suppose that on the upper line of development, the meaning of one of the functions that exists at Z is changed at commit X.
The commits from Z leading to A change both the function’s implementation and all calling sites that exist at Z, as well as new calling sites they add, to be consistent. There is no bug at A. Suppose that in the meantime on the lower line of development somebody adds a new calling site for that function at commit Y. The commits from Z leading to B all assume the old semantics of that function and the callers and the callee are consistent with each other. There is no bug at B, either. Suppose further that the two development lines merge cleanly at C, so no conflict resolution is required. Nevertheless, the code at C is broken, because the callers added on the lower line of development have not been converted to the new semantics introduced on the upper line of development. So if all you know is that D is bad, that Z is good, and that [git-bisect[1]](git-bisect) identifies C as the culprit, how will you figure out that the problem is due to this change in semantics? When the result of a `git bisect` is a non-merge commit, you should normally be able to discover the problem by examining just that commit. Developers can make this easy by breaking their changes into small self-contained commits. That won’t help in the case above, however, because the problem isn’t obvious from examination of any single commit; instead, a global view of the development is required. To make matters worse, the change in semantics in the problematic function may be just one small part of the changes in the upper line of development. On the other hand, if instead of merging at C you had rebased the history between Z and B on top of A, you would have gotten this linear history: ``` ---Z---o---X--...---o---A---o---o---Y*--...---o---B*--D* ``` Bisecting between Z and D\* would hit a single culprit commit Y\*, and understanding why Y\* was broken would probably be easier.
Partly for this reason, many experienced Git users, even when working on an otherwise merge-heavy project, keep the history linear by rebasing against the latest upstream version before publishing. Advanced branch management -------------------------- ### Fetching individual branches Instead of using [git-remote[1]](git-remote), you can also choose just to update one branch at a time, and to store it locally under an arbitrary name: ``` $ git fetch origin todo:my-todo-work ``` The first argument, `origin`, just tells Git to fetch from the repository you originally cloned from. The second argument tells Git to fetch the branch named `todo` from the remote repository, and to store it locally under the name `refs/heads/my-todo-work`. You can also fetch branches from other repositories; so ``` $ git fetch git://example.com/proj.git master:example-master ``` will create a new branch named `example-master` and store in it the branch named `master` from the repository at the given URL. If you already have a branch named example-master, it will attempt to [fast-forward](#fast-forwards) to the commit given by example.com’s master branch. In more detail: ### git fetch and fast-forwards In the previous example, when updating an existing branch, `git fetch` checks to make sure that the most recent commit on the remote branch is a descendant of the most recent commit on your copy of the branch before updating your copy of the branch to point at the new commit. Git calls this process a [fast-forward](#fast-forwards). A fast-forward looks something like this: ``` o--o--o--o <-- old head of the branch \ o--o--o <-- new head of the branch ``` In some cases it is possible that the new head will **not** actually be a descendant of the old head. 
For example, the developer may have realized a serious mistake was made and decided to backtrack, resulting in a situation like: ``` o--o--o--o--a--b <-- old head of the branch \ o--o--o <-- new head of the branch ``` In this case, `git fetch` will fail, and print out a warning. In that case, you can still force Git to update to the new head, as described in the following section. However, note that in the situation above this may mean losing the commits labeled `a` and `b`, unless you’ve already created a reference of your own pointing to them. ### Forcing git fetch to do non-fast-forward updates If git fetch fails because the new head of a branch is not a descendant of the old head, you may force the update with: ``` $ git fetch git://example.com/proj.git +master:refs/remotes/example/master ``` Note the addition of the `+` sign. Alternatively, you can use the `-f` flag to force updates of all the fetched branches, as in: ``` $ git fetch -f origin ``` Be aware that commits that the old version of example/master pointed at may be lost, as we saw in the previous section. ### Configuring remote-tracking branches We saw above that `origin` is just a shortcut to refer to the repository that you originally cloned from. 
This information is stored in Git configuration variables, which you can see using [git-config[1]](git-config): ``` $ git config -l core.repositoryformatversion=0 core.filemode=true core.logallrefupdates=true remote.origin.url=git://git.kernel.org/pub/scm/git/git.git remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* branch.master.remote=origin branch.master.merge=refs/heads/master ``` If there are other repositories that you also use frequently, you can create similar configuration options to save typing; for example, ``` $ git remote add example git://example.com/proj.git ``` adds the following to `.git/config`: ``` [remote "example"] url = git://example.com/proj.git fetch = +refs/heads/*:refs/remotes/example/* ``` Also note that the above configuration can be performed by directly editing the file `.git/config` instead of using [git-remote[1]](git-remote). After configuring the remote, the following three commands will do the same thing: ``` $ git fetch git://example.com/proj.git +refs/heads/*:refs/remotes/example/* $ git fetch example +refs/heads/*:refs/remotes/example/* $ git fetch example ``` See [git-config[1]](git-config) for more details on the configuration options mentioned above and [git-fetch[1]](git-fetch) for more details on the refspec syntax. Git concepts ------------ Git is built on a small number of simple but powerful ideas. While it is possible to get things done without understanding them, you will find Git much more intuitive if you do. We start with the most important, the [object database](#def_object_database) and the [index](#def_index). ### The Object Database We already saw in [Understanding History: Commits](#understanding-commits) that all commits are stored under a 40-digit "object name". In fact, all the information needed to represent the history of a project is stored in objects with such names. In each case the name is calculated by taking the SHA-1 hash of the contents of the object. 
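You can verify this naming scheme directly: an object's name is the SHA-1 of a short header giving the type and size, a NUL byte, and then the contents. A minimal sketch for a blob, assuming `sha1sum` is available and using an invented file name:

```shell
# The object name of a blob is sha1("blob <size>\0" + contents).
printf 'hello\n' > greeting.txt
git hash-object greeting.txt                       # Git computes the name
# The same name, computed by hand ("hello\n" is 6 bytes):
{ printf 'blob 6\0'; cat greeting.txt; } | sha1sum
```

Both commands print the same 40 hex digits, which is exactly the property the rest of this section relies on.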
The SHA-1 hash is a cryptographic hash function. What that means to us is that it is impossible to find two different objects with the same name. This has a number of advantages; among others: * Git can quickly determine whether two objects are identical or not, just by comparing names. * Since object names are computed the same way in every repository, the same content stored in two repositories will always be stored under the same name. * Git can detect errors when it reads an object, by checking that the object’s name is still the SHA-1 hash of its contents. (See [Object storage format](#object-details) for the details of the object formatting and SHA-1 calculation.) There are four different types of objects: "blob", "tree", "commit", and "tag". * A ["blob" object](#def_blob_object) is used to store file data. * A ["tree" object](#def_tree_object) ties one or more "blob" objects into a directory structure. In addition, a tree object can refer to other tree objects, thus creating a directory hierarchy. * A ["commit" object](#def_commit_object) ties such directory hierarchies together into a [directed acyclic graph](#def_DAG) of revisions—​each commit contains the object name of exactly one tree designating the directory hierarchy at the time of the commit. In addition, a commit refers to "parent" commit objects that describe the history of how we arrived at that directory hierarchy. * A ["tag" object](#def_tag_object) symbolically identifies and can be used to sign other objects. It contains the object name and type of another object, a symbolic name (of course!) and, optionally, a signature. The object types in some more detail: #### Commit Object The "commit" object links a physical state of a tree with a description of how we got there and why. 
Use the `--pretty=raw` option to [git-show[1]](git-show) or [git-log[1]](git-log) to examine your favorite commit: ``` $ git show -s --pretty=raw 2be7fcb476 commit 2be7fcb4764f2dbcee52635b91fedb1b3dcf7ab4 tree fb3a8bdd0ceddd019615af4d57a53f43d8cee2bf parent 257a84d9d02e90447b149af58b271c19405edb6a author Dave Watson <[email protected]> 1187576872 -0400 committer Junio C Hamano <[email protected]> 1187591163 -0700 Fix misspelling of 'suppress' in docs Signed-off-by: Junio C Hamano <[email protected]> ``` As you can see, a commit is defined by: * a tree: The SHA-1 name of a tree object (as defined below), representing the contents of a directory at a certain point in time. * parent(s): The SHA-1 name(s) of some number of commits which represent the immediately previous step(s) in the history of the project. The example above has one parent; merge commits may have more than one. A commit with no parents is called a "root" commit, and represents the initial revision of a project. Each project must have at least one root. A project can also have multiple roots, though that isn’t common (or necessarily a good idea). * an author: The name of the person responsible for this change, together with its date. * a committer: The name of the person who actually created the commit, with the date it was done. This may be different from the author, for example, if the author was someone who wrote a patch and emailed it to the person who used it to create the commit. * a comment describing this commit. Note that a commit does not itself contain any information about what actually changed; all changes are calculated by comparing the contents of the tree referred to by this commit with the trees associated with its parents. In particular, Git does not attempt to record file renames explicitly, though it can identify cases where the existence of the same file data at changing paths suggests a rename. (See, for example, the `-M` option to [git-diff[1]](git-diff)). 
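That changes are computed rather than stored is easy to see with plumbing commands. A minimal sketch in a throwaway repository (the names `commit-demo` and `f.txt` are invented; the `git config` lines just provide an identity for the demo commits):

```shell
mkdir commit-demo && cd commit-demo && git init -q
git config user.name Example && git config user.email example@example.com
echo one > f.txt && git add f.txt && git commit -qm 'first'
echo two > f.txt && git commit -qam 'second'
git cat-file commit HEAD   # stores only tree, parent, author, committer, message
git diff-tree -p HEAD      # the change, derived by comparing the two trees
```

The `cat-file` output contains no diff at all; `diff-tree` reconstructs the change from the commit's tree and its parent's tree.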
A commit is usually created by [git-commit[1]](git-commit), which creates a commit whose parent is normally the current HEAD, and whose tree is taken from the content currently stored in the index. #### Tree Object The ever-versatile [git-show[1]](git-show) command can also be used to examine tree objects, but [git-ls-tree[1]](git-ls-tree) will give you more details: ``` $ git ls-tree fb3a8bdd0ce 100644 blob 63c918c667fa005ff12ad89437f2fdc80926e21c .gitignore 100644 blob 5529b198e8d14decbe4ad99db3f7fb632de0439d .mailmap 100644 blob 6ff87c4664981e4397625791c8ea3bbb5f2279a3 COPYING 040000 tree 2fb783e477100ce076f6bf57e4a6f026013dc745 Documentation 100755 blob 3c0032cec592a765692234f1cba47dfdcc3a9200 GIT-VERSION-GEN 100644 blob 289b046a443c0647624607d471289b2c7dcd470b INSTALL 100644 blob 4eb463797adc693dc168b926b6932ff53f17d0b1 Makefile 100644 blob 548142c327a6790ff8821d67c2ee1eff7a656b52 README ... ``` As you can see, a tree object contains a list of entries, each with a mode, object type, SHA-1 name, and name, sorted by name. It represents the contents of a single directory tree. The object type may be a blob, representing the contents of a file, or another tree, representing the contents of a subdirectory. Since trees and blobs, like all other objects, are named by the SHA-1 hash of their contents, two trees have the same SHA-1 name if and only if their contents (including, recursively, the contents of all subdirectories) are identical. This allows Git to quickly determine the differences between two related tree objects, since it can ignore any entries with identical object names. (Note: in the presence of submodules, trees may also have commits as entries. See [Submodules](#submodules) for documentation.) Note that the files all have mode 644 or 755: Git actually only pays attention to the executable bit. 
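The consequence that identical contents always receive identical names can be demonstrated with the `write-tree` and `ls-tree` plumbing. A sketch in a throwaway repository, with invented directory names `a` and `b`:

```shell
mkdir tree-demo && cd tree-demo && git init -q
mkdir a b
echo 'same contents' > a/file
echo 'same contents' > b/file
git add .
git ls-tree $(git write-tree)   # the entries for a/ and b/ share one tree name
```

Both `tree` entries carry the same object name, so Git can tell the two subdirectories are identical without reading their contents.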
#### Blob Object You can use [git-show[1]](git-show) to examine the contents of a blob; take, for example, the blob in the entry for `COPYING` from the tree above: ``` $ git show 6ff87c4664 Note that the only valid version of the GPL as far as this project is concerned is _this_ particular version of the license (ie v2, not v2.2 or v3.x or whatever), unless explicitly otherwise stated. ... ``` A "blob" object is nothing but a binary blob of data. It doesn’t refer to anything else or have attributes of any kind. Since the blob is entirely defined by its data, if two files in a directory tree (or in multiple different versions of the repository) have the same contents, they will share the same blob object. The object is totally independent of its location in the directory tree, and renaming a file does not change the object that file is associated with. Note that any tree or blob object can be examined using [git-show[1]](git-show) with the <revision>:<path> syntax. This can sometimes be useful for browsing the contents of a tree that is not currently checked out. #### Trust If you receive the SHA-1 name of a blob from one source, and its contents from another (possibly untrusted) source, you can still trust that those contents are correct as long as the SHA-1 name agrees. This is because the SHA-1 is designed so that it is infeasible to find different contents that produce the same hash. Similarly, you need only trust the SHA-1 name of a top-level tree object to trust the contents of the entire directory that it refers to, and if you receive the SHA-1 name of a commit from a trusted source, then you can easily verify the entire history of commits reachable through parents of that commit, and all of those contents of the trees referred to by those commits. So to introduce some real trust in the system, the only thing you need to do is to digitally sign just `one` special note, which includes the name of a top-level commit. 
Your digital signature shows others that you trust that commit, and the immutability of the history of commits tells others that they can trust the whole history. In other words, you can easily validate a whole archive by just sending out a single email that tells the people the name (SHA-1 hash) of the top commit, and digitally sign that email using something like GPG/PGP. To assist in this, Git also provides the tag object…​ #### Tag Object A tag object contains an object, object type, tag name, the name of the person ("tagger") who created the tag, and a message, which may contain a signature, as can be seen using [git-cat-file[1]](git-cat-file): ``` $ git cat-file tag v1.5.0 object 437b1b20df4b356c9342dac8d38849f24ef44f27 type commit tag v1.5.0 tagger Junio C Hamano <[email protected]> 1171411200 +0000 GIT 1.5.0 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQBF0lGqwMbZpPMRm5oRAuRiAJ9ohBLd7s2kqjkKlq1qqC57SbnmzQCdG4ui nLE/L9aUXdWeTFPron96DLA= =2E+0 -----END PGP SIGNATURE----- ``` See the [git-tag[1]](git-tag) command to learn how to create and verify tag objects. (Note that [git-tag[1]](git-tag) can also be used to create "lightweight tags", which are not tag objects at all, but just simple references whose names begin with `refs/tags/`). #### How Git stores objects efficiently: pack files Newly created objects are initially created in a file named after the object’s SHA-1 hash (stored in `.git/objects`). Unfortunately this system becomes inefficient once a project has a lot of objects. Try this on an old project: ``` $ git count-objects 6930 objects, 47620 kilobytes ``` The first number is the number of objects which are kept in individual files. The second is the amount of space taken up by those "loose" objects. 
You can save space and make Git faster by moving these loose objects into a "pack file", which stores a group of objects in an efficient compressed format; the details of how pack files are formatted can be found in [gitformat-pack[5]](gitformat-pack). To put the loose objects into a pack, just run git repack: ``` $ git repack Counting objects: 6020, done. Delta compression using up to 4 threads. Compressing objects: 100% (6020/6020), done. Writing objects: 100% (6020/6020), done. Total 6020 (delta 4070), reused 0 (delta 0) ``` This creates a single "pack file" in .git/objects/pack/ containing all currently unpacked objects. You can then run ``` $ git prune ``` to remove any of the "loose" objects that are now contained in the pack. This will also remove any unreferenced objects (which may be created when, for example, you use `git reset` to remove a commit). You can verify that the loose objects are gone by looking at the `.git/objects` directory or by running ``` $ git count-objects 0 objects, 0 kilobytes ``` Although the object files are gone, any commands that refer to those objects will work exactly as they did before. The [git-gc[1]](git-gc) command performs packing, pruning, and more for you, so is normally the only high-level command you need. #### Dangling objects The [git-fsck[1]](git-fsck) command will sometimes complain about dangling objects. They are not a problem. The most common cause of dangling objects is that you’ve rebased a branch, or you have pulled from somebody else who rebased a branch—​see [Rewriting history and maintaining patch series](#cleaning-up-history). In that case, the old head of the original branch still exists, as does everything it pointed to. The branch pointer itself just doesn’t, since you replaced it with another one. There are also other situations that cause dangling objects.
For example, a "dangling blob" may arise because you did a `git add` of a file, but then, before you actually committed it and made it part of the bigger picture, you changed something else in that file and committed that **updated** thing—​the old state that you added originally ends up not being pointed to by any commit or tree, so it’s now a dangling blob object. Similarly, when the "ort" merge strategy runs, and finds that there are criss-cross merges and thus more than one merge base (which is fairly unusual, but it does happen), it will generate one temporary midway tree (or possibly even more, if you had lots of criss-crossing merges and more than two merge bases) as a temporary internal merge base, and again, those are real objects, but the end result will not end up pointing to them, so they end up "dangling" in your repository. Generally, dangling objects aren’t anything to worry about. They can even be very useful: if you screw something up, the dangling objects can be how you recover your old tree (say, you did a rebase, and realized that you really didn’t want to—​you can look at what dangling objects you have, and decide to reset your head to some old dangling state). For commits, you can just use: ``` $ gitk <dangling-commit-sha-goes-here> --not --all ``` This asks for all the history reachable from the given commit but not from any branch, tag, or other reference. If you decide it’s something you want, you can always create a new reference to it, e.g., ``` $ git branch recovered-branch <dangling-commit-sha-goes-here> ``` For blobs and trees, you can’t do the same, but you can still examine them. You can just do ``` $ git show <dangling-blob/tree-sha-goes-here> ``` to show what the contents of the blob were (or, for a tree, basically what the `ls` for that directory was), and that may give you some idea of what the operation was that left that dangling object. Usually, dangling blobs and trees aren’t very interesting. 
They’re almost always the result of either being a half-way mergebase (the blob will often even have the conflict markers from a merge in it, if you have had conflicting merges that you fixed up by hand), or simply because you interrupted a `git fetch` with ^C or something like that, leaving `some` of the new objects in the object database, but just dangling and useless. Anyway, once you are sure that you’re not interested in any dangling state, you can just prune all unreachable objects: ``` $ git prune ``` and they’ll be gone. (You should only run `git prune` on a quiescent repository—​it’s kind of like doing a filesystem fsck recovery: you don’t want to do that while the filesystem is mounted. `git prune` is designed not to cause any harm in such cases of concurrent accesses to a repository but you might receive confusing or scary messages.) #### Recovering from repository corruption By design, Git treats data trusted to it with caution. However, even in the absence of bugs in Git itself, it is still possible that hardware or operating system errors could corrupt data. The first defense against such problems is backups. You can back up a Git directory using clone, or just using cp, tar, or any other backup mechanism. As a last resort, you can search for the corrupted objects and attempt to replace them by hand. Back up your repository before attempting this in case you corrupt things even more in the process. We’ll assume that the problem is a single missing or corrupted blob, which is sometimes a solvable problem. (Recovering missing trees and especially commits is **much** harder). Before starting, verify that there is corruption, and figure out where it is with [git-fsck[1]](git-fsck); this may be time-consuming. 
Assume the output looks like this: ``` $ git fsck --full --no-dangling broken link from tree 2d9263c6d23595e7cb2a21e5ebbb53655278dff8 to blob 4b9458b3786228369c63936db65827de3cc06200 missing blob 4b9458b3786228369c63936db65827de3cc06200 ``` Now you know that blob 4b9458b3 is missing, and that the tree 2d9263c6 points to it. If you could find just one copy of that missing blob object, possibly in some other repository, you could move it into `.git/objects/4b/9458b3...` and be done. Suppose you can’t. You can still examine the tree that pointed to it with [git-ls-tree[1]](git-ls-tree), which might output something like: ``` $ git ls-tree 2d9263c6d23595e7cb2a21e5ebbb53655278dff8 100644 blob 8d14531846b95bfa3564b58ccfb7913a034323b8 .gitignore 100644 blob ebf9bf84da0aab5ed944264a5db2a65fe3a3e883 .mailmap 100644 blob ca442d313d86dc67e0a2e5d584b465bd382cbf5c COPYING ... 100644 blob 4b9458b3786228369c63936db65827de3cc06200 myfile ... ``` So now you know that the missing blob was the data for a file named `myfile`. And chances are you can also identify the directory—​let’s say it’s in `somedirectory`. If you’re lucky the missing copy might be the same as the copy you have checked out in your working tree at `somedirectory/myfile`; you can test whether that’s right with [git-hash-object[1]](git-hash-object): ``` $ git hash-object -w somedirectory/myfile ``` which will create and store a blob object with the contents of somedirectory/myfile, and output the SHA-1 of that object. If you’re extremely lucky it might be 4b9458b3786228369c63936db65827de3cc06200, in which case you’ve guessed right, and the corruption is fixed! Otherwise, you need more information. How do you tell which version of the file has been lost? The easiest way to do this is with: ``` $ git log --raw --all --full-history -- somedirectory/myfile ``` Because you’re asking for raw output, you’ll now get something like ``` commit abc Author: Date: ...
:100644 100644 4b9458b newsha M somedirectory/myfile

commit xyz
Author:
Date:
...

:100644 100644 oldsha 4b9458b M somedirectory/myfile
```

This tells you that the immediately following version of the file was "newsha", and that the immediately preceding version was "oldsha". You also know the commit messages that went with the change from oldsha to 4b9458b and with the change from 4b9458b to newsha. If you’ve been committing small enough changes, you may now have a good shot at reconstructing the contents of the in-between state 4b9458b. If you can do that, you can now recreate the missing object with

```
$ git hash-object -w <recreated-file>
```

and your repository is good again! (Btw, you could have ignored the `fsck`, and started with doing a

```
$ git log --raw --all
```

and just looked for the SHA-1 of the missing object (4b9458b) in that whole thing. It’s up to you—Git does **have** a lot of information, it is just missing one particular blob version.)

### The index

The index is a binary file (generally kept in `.git/index`) containing a sorted list of path names, each with permissions and the SHA-1 of a blob object; [git-ls-files[1]](git-ls-files) can show you the contents of the index:

```
$ git ls-files --stage
100644 63c918c667fa005ff12ad89437f2fdc80926e21c 0	.gitignore
100644 5529b198e8d14decbe4ad99db3f7fb632de0439d 0	.mailmap
100644 6ff87c4664981e4397625791c8ea3bbb5f2279a3 0	COPYING
100644 a37b2152bd26be2c2289e1f57a292534a51a93c7 0	Documentation/.gitignore
100644 fbefe9a45b00a54b58d94d06eca48b03d40a50e0 0	Documentation/Makefile
...
100644 2511aef8d89ab52be5ec6a5e46236b4b6bcd07ea 0	xdiff/xtypes.h
100644 2ade97b2574a9f77e7ae4002a4e07a6a38e46d07 0	xdiff/xutils.c
100644 d5de8292e05e7c36c4b68857c1cf9855e3d2f70a 0	xdiff/xutils.h
```

Note that in older documentation you may see the index called the "current directory cache" or just the "cache". It has three important properties:

1.
The index contains all the information necessary to generate a single (uniquely determined) tree object. For example, running [git-commit[1]](git-commit) generates this tree object from the index, stores it in the object database, and uses it as the tree object associated with the new commit. 2. The index enables fast comparisons between the tree object it defines and the working tree. It does this by storing some additional data for each entry (such as the last modified time). This data is not displayed above, and is not stored in the created tree object, but it can be used to determine quickly which files in the working directory differ from what was stored in the index, and thus save Git from having to read all of the data from such files to look for changes. 3. It can efficiently represent information about merge conflicts between different tree objects, allowing each pathname to be associated with sufficient information about the trees involved that you can create a three-way merge between them. We saw in [Getting conflict-resolution help during a merge](#conflict-resolution) that during a merge the index can store multiple versions of a single file (called "stages"). The third column in the [git-ls-files[1]](git-ls-files) output above is the stage number, and will take on values other than 0 for files with merge conflicts. The index is thus a sort of temporary staging area, which is filled with a tree which you are in the process of working on. If you blow the index away entirely, you generally haven’t lost any information as long as you have the name of the tree that it described. Submodules ---------- Large projects are often composed of smaller, self-contained modules. 
For example, an embedded Linux distribution’s source tree would include every piece of software in the distribution with some local modifications; a movie player might need to build against a specific, known-working version of a decompression library; several independent programs might all share the same build scripts. With centralized revision control systems this is often accomplished by including every module in one single repository. Developers can check out all modules or only the modules they need to work with. They can even modify files across several modules in a single commit while moving things around or updating APIs and translations. Git does not allow partial checkouts, so duplicating this approach in Git would force developers to keep a local copy of modules they are not interested in touching. Commits in an enormous checkout would be slower than you’d expect as Git would have to scan every directory for changes. If modules have a lot of local history, clones would take forever. On the plus side, distributed revision control systems can much better integrate with external sources. In a centralized model, a single arbitrary snapshot of the external project is exported from its own revision control and then imported into the local revision control on a vendor branch. All the history is hidden. With distributed revision control you can clone the entire external history and much more easily follow development and re-merge local changes. Git’s submodule support allows a repository to contain, as a subdirectory, a checkout of an external project. Submodules maintain their own identity; the submodule support just stores the submodule repository location and commit ID, so other developers who clone the containing project ("superproject") can easily clone all the submodules at the same revision. Partial checkouts of the superproject are possible: you can tell Git to clone none, some or all of the submodules. 
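That clone-time choice of none, some, or all submodules is made by initializing only the ones you want before updating. A self-contained sketch (the repository names `one` and `two` are invented for illustration; the `protocol.file.allow` setting merely permits local file-path submodule clones on recent Git and is harmless on older versions):

```shell
set -e
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
base=$(mktemp -d); cd "$base"
for m in one two; do                  # two throwaway submodule repositories
  git init -q "$m"
  echo "$m" > "$m/file"
  git -C "$m" add file
  git -C "$m" commit -qm "init $m"
done
git init -q super
cd super
git -c protocol.file.allow=always submodule --quiet add "$base/one" one
git -c protocol.file.allow=always submodule --quiet add "$base/two" two
git commit -qm "add submodules"
cd "$base"
git clone -q super cloned
cd cloned
git submodule --quiet init one                       # register only this one
git -c protocol.file.allow=always submodule --quiet update
ls one                                               # checked out
ls two                                               # still an empty directory
```

Only the initialized submodule is cloned and checked out; the other remains an empty placeholder directory until you decide you need it.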
The [git-submodule[1]](git-submodule) command is available since Git 1.5.3. Users with Git 1.5.2 can look up the submodule commits in the repository and manually check them out; earlier versions won’t recognize the submodules at all. To see how submodule support works, create four example repositories that can be used later as a submodule: ``` $ mkdir ~/git $ cd ~/git $ for i in a b c d do mkdir $i cd $i git init echo "module $i" > $i.txt git add $i.txt git commit -m "Initial commit, submodule $i" cd .. done ``` Now create the superproject and add all the submodules: ``` $ mkdir super $ cd super $ git init $ for i in a b c d do git submodule add ~/git/$i $i done ``` | | | | --- | --- | | Note | Do not use local URLs here if you plan to publish your superproject! | See what files `git submodule` created: ``` $ ls -a . .. .git .gitmodules a b c d ``` The `git submodule add <repo> <path>` command does a couple of things: * It clones the submodule from `<repo>` to the given `<path>` under the current directory and by default checks out the master branch. * It adds the submodule’s clone path to the [gitmodules[5]](gitmodules) file and adds this file to the index, ready to be committed. * It adds the submodule’s current commit ID to the index, ready to be committed. Commit the superproject: ``` $ git commit -m "Add submodules a, b, c and d." ``` Now clone the superproject: ``` $ cd .. $ git clone super cloned $ cd cloned ``` The submodule directories are there, but they’re empty: ``` $ ls -a a . .. $ git submodule status -d266b9873ad50488163457f025db7cdd9683d88b a -e81d457da15309b4fef4249aba9b50187999670d b -c1536a972b9affea0f16e0680ba87332dc059146 c -d96249ff5d57de5de093e6baff9e0aafa5276a74 d ``` | | | | --- | --- | | Note | The commit object names shown above would be different for you, but they should match the HEAD commit object names of your repositories. You can check it by running `git ls-remote ../a`. | Pulling down the submodules is a two-step process. 
First run `git submodule init` to add the submodule repository URLs to `.git/config`:

```
$ git submodule init
```

Now use `git submodule update` to clone the repositories and check out the commits specified in the superproject:

```
$ git submodule update
$ cd a
$ ls -a
.  ..  .git  a.txt
```

One major difference between `git submodule update` and `git submodule add` is that `git submodule update` checks out a specific commit, rather than the tip of a branch. It’s like checking out a tag: the head is detached, so you’re not working on a branch.

```
$ git branch
* (detached from d266b98)
  master
```

If you want to make a change within a submodule and you have a detached head, then you should create or check out a branch, make your changes, publish the change within the submodule, and then update the superproject to reference the new commit:

```
$ git switch master
```

or

```
$ git switch -c fix-up
```

then

```
$ echo "adding a line again" >> a.txt
$ git commit -a -m "Updated the submodule from within the superproject."
$ git push
$ cd ..
$ git diff
diff --git a/a b/a
index d266b98..261dfac 160000
--- a/a
+++ b/a
@@ -1 +1 @@
-Subproject commit d266b9873ad50488163457f025db7cdd9683d88b
+Subproject commit 261dfac35cb99d380eb966e102c1197139f7fa24
$ git add a
$ git commit -m "Updated submodule a."
$ git push
```

You have to run `git submodule update` after `git pull` if you want to update submodules, too.

### Pitfalls with submodules

Always publish the submodule change before publishing the change to the superproject that references it. If you forget to publish the submodule change, others won’t be able to clone the repository:

```
$ cd ~/git/super/a
$ echo i added another line to this file >> a.txt
$ git commit -a -m "doing it wrong this time"
$ cd ..
$ git add a
$ git commit -m "Updated submodule a again."
$ git push
$ cd ~/git/cloned
$ git pull
$ git submodule update
error: pathspec '261dfac35cb99d380eb966e102c1197139f7fa24' did not match any file(s) known to git.
Did you forget to 'git add'?
Unable to checkout '261dfac35cb99d380eb966e102c1197139f7fa24' in submodule path 'a'
```

In older Git versions it was easy to forget to commit new or modified files in a submodule, which silently led to problems similar to those caused by not pushing the submodule changes. Starting with Git 1.7.0 both `git status` and `git diff` in the superproject show submodules as modified when they contain new or modified files, to protect against accidentally committing such a state. `git diff` will also add a `-dirty` to the work tree side when generating patch output or when used with the `--submodule` option:

```
$ git diff
diff --git a/sub b/sub
--- a/sub
+++ b/sub
@@ -1 +1 @@
-Subproject commit 3f356705649b5d566d97ff843cf193359229a453
+Subproject commit 3f356705649b5d566d97ff843cf193359229a453-dirty
$ git diff --submodule
Submodule sub 3f35670..3f35670-dirty:
```

You also should not rewind branches in a submodule beyond commits that were ever recorded in any superproject. It’s not safe to run `git submodule update` if you’ve made and committed changes within a submodule without checking out a branch first. They will be silently overwritten:

```
$ cat a.txt
module a
$ echo line added from private2 >> a.txt
$ git commit -a -m "line added inside private2"
$ cd ..
$ git submodule update
Submodule path 'a': checked out 'd266b9873ad50488163457f025db7cdd9683d88b'
$ cd a
$ cat a.txt
module a
```

| | |
| --- | --- |
| Note | The changes are still visible in the submodule’s reflog. |

If you have uncommitted changes in your submodule working tree, `git submodule update` will not overwrite them. Instead, you get the usual warning about not being able to switch from a dirty branch.

Low-level git operations
------------------------

Many of the higher-level commands were originally implemented as shell scripts using a smaller core of low-level Git commands. These can still be useful when doing unusual things with Git, or just as a way to understand its inner workings.
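For example, the everyday `git add` is a thin wrapper over the plumbing command `git update-index`. A minimal sketch, run in a throwaway repository:

```shell
set -e
cd "$(mktemp -d)"
git init -q
echo hi > a.txt
git update-index --add a.txt   # plumbing-level equivalent of "git add a.txt"
git ls-files --stage           # the new entry is now in the index
```

The `ls-files --stage` output shows the blob mode, SHA-1, stage number, and path, just as in the index listing earlier in this manual.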
### Object access and manipulation The [git-cat-file[1]](git-cat-file) command can show the contents of any object, though the higher-level [git-show[1]](git-show) is usually more useful. The [git-commit-tree[1]](git-commit-tree) command allows constructing commits with arbitrary parents and trees. A tree can be created with [git-write-tree[1]](git-write-tree) and its data can be accessed by [git-ls-tree[1]](git-ls-tree). Two trees can be compared with [git-diff-tree[1]](git-diff-tree). A tag is created with [git-mktag[1]](git-mktag), and the signature can be verified by [git-verify-tag[1]](git-verify-tag), though it is normally simpler to use [git-tag[1]](git-tag) for both. ### The Workflow High-level operations such as [git-commit[1]](git-commit) and [git-restore[1]](git-restore) work by moving data between the working tree, the index, and the object database. Git provides low-level operations which perform each of these steps individually. Generally, all Git operations work on the index file. Some operations work **purely** on the index file (showing the current state of the index), but most operations move data between the index file and either the database or the working directory. Thus there are four main combinations: #### working directory → index The [git-update-index[1]](git-update-index) command updates the index with information from the working directory. You generally update the index information by just specifying the filename you want to update, like so: ``` $ git update-index filename ``` but to avoid common mistakes with filename globbing etc., the command will not normally add totally new entries or remove old entries, i.e. it will normally just update existing cache entries. To tell Git that yes, you really do realize that certain files no longer exist, or that new files should be added, you should use the `--remove` and `--add` flags respectively. NOTE! 
A `--remove` flag does `not` mean that subsequent filenames will necessarily be removed: if the files still exist in your directory structure, the index will be updated with their new status, not removed. The only thing `--remove` means is that update-index will consider a removed file to be a valid thing, and if the file really does not exist any more, it will update the index accordingly. As a special case, you can also do `git update-index --refresh`, which will refresh the "stat" information of each index entry to match the current stat information. It will `not` update the object status itself, and it will only update the fields that are used to quickly test whether an object still matches its old backing store object. The previously introduced [git-add[1]](git-add) is just a wrapper for [git-update-index[1]](git-update-index).

#### index → object database

You write your current index file to a "tree" object with the program

```
$ git write-tree
```

that doesn’t come with any options—it will just write out the current index into the set of tree objects that describe that state, and it will return the name of the resulting top-level tree. You can use that tree to re-generate the index at any time by going in the other direction:

#### object database → index

You read a "tree" file from the object database, and use that to populate (and overwrite—don’t do this if your index contains any unsaved state that you might want to restore later!) your current index. Normal operation is just

```
$ git read-tree <SHA-1 of tree>
```

and your index file will now be equivalent to the tree that you saved earlier. However, that is only your `index` file: your working directory contents have not been modified.

#### index → working directory

You update your working directory from the index by "checking out" files.
This is not a very common operation, since normally you’d just keep your files updated, and rather than write to your working directory, you’d tell the index files about the changes in your working directory (i.e. `git update-index`). However, if you decide to jump to a new version, or check out somebody else’s version, or just restore a previous tree, you’d populate your index file with read-tree, and then you need to check out the result with ``` $ git checkout-index filename ``` or, if you want to check out all of the index, use `-a`. NOTE! `git checkout-index` normally refuses to overwrite old files, so if you have an old version of the tree already checked out, you will need to use the `-f` flag (`before` the `-a` flag or the filename) to `force` the checkout. Finally, there are a few odds and ends which are not purely moving from one representation to the other: #### Tying it all together To commit a tree you have instantiated with `git write-tree`, you’d create a "commit" object that refers to that tree and the history behind it—​most notably the "parent" commits that preceded it in history. Normally a "commit" has one parent: the previous state of the tree before a certain change was made. However, sometimes it can have two or more parent commits, in which case we call it a "merge", due to the fact that such a commit brings together ("merges") two or more previous states represented by other commits. In other words, while a "tree" represents a particular directory state of a working directory, a "commit" represents that state in time, and explains how we got there. You create a commit object by giving it the tree that describes the state at the time of the commit, and a list of parents: ``` $ git commit-tree <tree> -p <parent> [(-p <parent2>)...] ``` and then giving the reason for the commit on stdin (either through redirection from a pipe or file, or by just typing it at the tty). 
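Putting the preceding steps together, an entire commit can be made with plumbing commands alone. A sketch meant for a scratch repository (the exported identity values are placeholders that `git commit-tree` needs to build the author and committer lines):

```shell
set -e
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
cd "$(mktemp -d)"
git init -q
echo hello > greeting
git update-index --add greeting          # working directory -> index
tree=$(git write-tree)                   # index -> object database
commit=$(echo "Initial commit" | git commit-tree "$tree")
git update-ref HEAD "$commit"            # remember the new state
git log --oneline
```

This is, in essence, what `git commit` does for you: update-index, write-tree, commit-tree (with the current `HEAD` as parent), and update-ref.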
`git commit-tree` will return the name of the object that represents that commit, and you should save it away for later use. Normally, you’d commit a new `HEAD` state, and while Git doesn’t care where you save the note about that state, in practice we tend to just write the result to the file pointed at by `.git/HEAD`, so that we can always see what the last committed state was. Here is a picture that illustrates how various pieces fit together: ``` commit-tree commit obj +----+ | | | | V V +-----------+ | Object DB | | Backing | | Store | +-----------+ ^ write-tree | | tree obj | | | | read-tree | | tree obj V +-----------+ | Index | | "cache" | +-----------+ update-index ^ blob obj | | | | checkout-index -u | | checkout-index stat | | blob obj V +-----------+ | Working | | Directory | +-----------+ ``` ### Examining the data You can examine the data represented in the object database and the index with various helper tools. For every object, you can use [git-cat-file[1]](git-cat-file) to examine details about the object: ``` $ git cat-file -t <objectname> ``` shows the type of the object, and once you have the type (which is usually implicit in where you find the object), you can use ``` $ git cat-file blob|tree|commit|tag <objectname> ``` to show its contents. NOTE! Trees have binary content, and as a result there is a special helper for showing that content, called `git ls-tree`, which turns the binary content into a more easily readable form. It’s especially instructive to look at "commit" objects, since those tend to be small and fairly self-explanatory. In particular, if you follow the convention of having the top commit name in `.git/HEAD`, you can do ``` $ git cat-file commit HEAD ``` to see what the top commit was. ### Merging multiple trees Git can help you perform a three-way merge, which can in turn be used for a many-way merge by repeating the merge procedure several times. 
The usual situation is that you only do one three-way merge (reconciling two lines of history) and commit the result, but if you like to, you can merge several branches in one go. To perform a three-way merge, you start with the two commits you want to merge, find their closest common parent (a third commit), and compare the trees corresponding to these three commits. To get the "base" for the merge, look up the common parent of two commits: ``` $ git merge-base <commit1> <commit2> ``` This prints the name of a commit they are both based on. You should now look up the tree objects of those commits, which you can easily do with ``` $ git cat-file commit <commitname> | head -1 ``` since the tree object information is always the first line in a commit object. Once you know the three trees you are going to merge (the one "original" tree, aka the common tree, and the two "result" trees, aka the branches you want to merge), you do a "merge" read into the index. This will complain if it has to throw away your old index contents, so you should make sure that you’ve committed those—​in fact you would normally always do a merge against your last commit (which should thus match what you have in your current index anyway). To do the merge, do ``` $ git read-tree -m -u <origtree> <yourtree> <targettree> ``` which will do all trivial merge operations for you directly in the index file, and you can just write the result out with `git write-tree`. ### Merging multiple trees, continued Sadly, many merges aren’t trivial. If there are files that have been added, moved or removed, or if both branches have modified the same file, you will be left with an index tree that contains "merge entries" in it. Such an index tree can `NOT` be written out to a tree object, and you will have to resolve any such merge clashes using other tools before you can write out the result. You can examine such index state with `git ls-files --unmerged` command. 
An example: ``` $ git read-tree -m $orig HEAD $target $ git ls-files --unmerged 100644 263414f423d0e4d70dae8fe53fa34614ff3e2860 1 hello.c 100644 06fa6a24256dc7e560efa5687fa84b51f0263c3a 2 hello.c 100644 cc44c73eb783565da5831b4d820c962954019b69 3 hello.c ``` Each line of the `git ls-files --unmerged` output begins with the blob mode bits, blob SHA-1, `stage number`, and the filename. The `stage number` is Git’s way to say which tree it came from: stage 1 corresponds to the `$orig` tree, stage 2 to the `HEAD` tree, and stage 3 to the `$target` tree. Earlier we said that trivial merges are done inside `git read-tree -m`. For example, if the file did not change from `$orig` to `HEAD` or `$target`, or if the file changed from `$orig` to `HEAD` and `$orig` to `$target` the same way, obviously the final outcome is what is in `HEAD`. What the above example shows is that file `hello.c` was changed from `$orig` to `HEAD` and `$orig` to `$target` in a different way. You could resolve this by running your favorite 3-way merge program, e.g. `diff3`, `merge`, or Git’s own merge-file, on the blob objects from these three stages yourself, like this: ``` $ git cat-file blob 263414f >hello.c~1 $ git cat-file blob 06fa6a2 >hello.c~2 $ git cat-file blob cc44c73 >hello.c~3 $ git merge-file hello.c~2 hello.c~1 hello.c~3 ``` This would leave the merge result in `hello.c~2` file, along with conflict markers if there are conflicts. After verifying the merge result makes sense, you can tell Git what the final merge result for this file is by: ``` $ mv -f hello.c~2 hello.c $ git update-index hello.c ``` When a path is in the "unmerged" state, running `git update-index` for that path tells Git to mark the path resolved. The above is the description of a Git merge at the lowest level, to help you understand what conceptually happens under the hood. In practice, nobody, not even Git itself, runs `git cat-file` three times for this. 
There is a `git merge-index` program that extracts the stages to temporary files and calls a "merge" script on it: ``` $ git merge-index git-merge-one-file hello.c ``` and that is what higher level `git merge -s resolve` is implemented with. Hacking git ----------- This chapter covers internal details of the Git implementation which probably only Git developers need to understand. ### Object storage format All objects have a statically determined "type" which identifies the format of the object (i.e. how it is used, and how it can refer to other objects). There are currently four different object types: "blob", "tree", "commit", and "tag". Regardless of object type, all objects share the following characteristics: they are all deflated with zlib, and have a header that not only specifies their type, but also provides size information about the data in the object. It’s worth noting that the SHA-1 hash that is used to name the object is the hash of the original data plus this header, so `sha1sum` `file` does not match the object name for `file`. As a result, the general consistency of an object can always be tested independently of the contents or the type of the object: all objects can be validated by verifying that (a) their hashes match the content of the file and (b) the object successfully inflates to a stream of bytes that forms a sequence of `<ascii type without space> + <space> + <ascii decimal size> + <byte\0> + <binary object data>`. The structured objects can further have their structure and connectivity to other objects verified. This is generally done with the `git fsck` program, which generates a full dependency graph of all objects, and verifies their internal consistency (in addition to just verifying their superficial consistency through the hash). ### A birds-eye view of Git’s source code It is not always easy for new developers to find their way through Git’s source code. This section gives you a little guidance to show where to start. 
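The header-plus-contents hashing described in the "Object storage format" section above can be reproduced with ordinary tools. A minimal sketch (it assumes a SHA-1 repository, which is still the default, and the GNU `sha1sum` utility; the file name and contents are arbitrary):

```shell
set -e
cd "$(mktemp -d)"
git init -q
printf 'hello\n' > f
h1=$(git hash-object f)
# Reproduce the header by hand: "<type> <size>" + NUL byte + data.
size=$(wc -c < f | tr -d ' ')
h2=$({ printf 'blob %s\0' "$size"; cat f; } | sha1sum | cut -d' ' -f1)
echo "$h1"
echo "$h2"    # identical to h1
```

This also shows why a plain `sha1sum f` does not match the object name: the header participates in the hash.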
A good place to start is with the contents of the initial commit, with: ``` $ git switch --detach e83c5163 ``` The initial revision lays the foundation for almost everything Git has today, but is small enough to read in one sitting. Note that terminology has changed since that revision. For example, the README in that revision uses the word "changeset" to describe what we now call a [commit](#def_commit_object). Also, we do not call it "cache" any more, but rather "index"; however, the file is still called `cache.h`. Remark: Not much reason to change it now, especially since there is no good single name for it anyway, because it is basically `the` header file which is included by `all` of Git’s C sources. If you grasp the ideas in that initial commit, you should check out a more recent version and skim `cache.h`, `object.h` and `commit.h`. In the early days, Git (in the tradition of UNIX) was a bunch of programs which were extremely simple, and which you used in scripts, piping the output of one into another. This turned out to be good for initial development, since it was easier to test new things. However, recently many of these parts have become builtins, and some of the core has been "libified", i.e. put into libgit.a for performance, portability reasons, and to avoid code duplication. By now, you know what the index is (and find the corresponding data structures in `cache.h`), and that there are just a couple of object types (blobs, trees, commits and tags) which inherit their common structure from `struct object`, which is their first member (and thus, you can cast e.g. `(struct object *)commit` to achieve the `same` as `&commit->object`, i.e. get at the object name and flags). Now is a good point to take a break to let this information sink in. Next step: get familiar with the object naming. Read [Naming commits](#naming-commits). There are quite a few ways to name an object (and not only revisions!). All of these are handled in `sha1_name.c`. 
Just have a quick look at the function `get_sha1()`. A lot of the special handling is done by functions like `get_sha1_basic()` or the likes. This is just to get you into the groove for the most libified part of Git: the revision walker. Basically, the initial version of `git log` was a shell script: ``` $ git-rev-list --pretty $(git-rev-parse --default HEAD "$@") | \ LESS=-S ${PAGER:-less} ``` What does this mean? `git rev-list` is the original version of the revision walker, which `always` printed a list of revisions to stdout. It is still functional, and needs to, since most new Git commands start out as scripts using `git rev-list`. `git rev-parse` is not as important any more; it was only used to filter out options that were relevant for the different plumbing commands that were called by the script. Most of what `git rev-list` did is contained in `revision.c` and `revision.h`. It wraps the options in a struct named `rev_info`, which controls how and what revisions are walked, and more. The original job of `git rev-parse` is now taken by the function `setup_revisions()`, which parses the revisions and the common command-line options for the revision walker. This information is stored in the struct `rev_info` for later consumption. You can do your own command-line option parsing after calling `setup_revisions()`. After that, you have to call `prepare_revision_walk()` for initialization, and then you can get the commits one by one with the function `get_revision()`. If you are interested in more details of the revision walking process, just have a look at the first implementation of `cmd_log()`; call `git show v1.3.0~155^2~4` and scroll down to that function (note that you no longer need to call `setup_pager()` directly). Nowadays, `git log` is a builtin, which means that it is `contained` in the command `git`. 
The source side of a builtin is

* a function called `cmd_<bla>`, typically defined in `builtin/<bla>.c` (note that older versions of Git used to have it in `builtin-<bla>.c` instead), and declared in `builtin.h`,
* an entry in the `commands[]` array in `git.c`, and
* an entry in `BUILTIN_OBJECTS` in the `Makefile`.

Sometimes, more than one builtin is contained in one source file. For example, `cmd_whatchanged()` and `cmd_log()` both reside in `builtin/log.c`, since they share quite a bit of code. In that case, the commands which are `not` named like the `.c` file in which they live have to be listed in `BUILT_INS` in the `Makefile`. `git log` looks more complicated in C than it does in the original script, but that allows for much greater flexibility and performance. Here again it is a good point to take a pause. Lesson three is: study the code. Really, it is the best way to learn about the organization of Git (after you know the basic concepts). So, think about something which you are interested in, say, "how can I access a blob just knowing the object name of it?". The first step is to find a Git command with which you can do it. In this example, it is either `git show` or `git cat-file`. For the sake of clarity, let’s stay with `git cat-file`, because it

* is plumbing, and
* was around even in the initial commit (it literally went only through some 20 revisions as `cat-file.c`, was renamed to `builtin/cat-file.c` when made a builtin, and then saw less than 10 versions).

So, look into `builtin/cat-file.c`, search for `cmd_cat_file()` and look at what it does.

```
git_config(git_default_config);
if (argc != 3)
	usage("git cat-file [-t|-s|-e|-p|<type>] <sha1>");
if (get_sha1(argv[2], sha1))
	die("Not a valid object name %s", argv[2]);
```

Let’s skip over the obvious details; the only really interesting part here is the call to `get_sha1()`.
It tries to interpret `argv[2]` as an object name, and if it refers to an object which is present in the current repository, it writes the resulting SHA-1 into the variable `sha1`. Two things are interesting here: * `get_sha1()` returns 0 on `success`. This might surprise some new Git hackers, but there is a long tradition in UNIX to return different negative numbers in case of different errors—​and 0 on success. * the variable `sha1` in the function signature of `get_sha1()` is `unsigned char *`, but is actually expected to be a pointer to `unsigned char[20]`. This variable will contain the 160-bit SHA-1 of the given commit. Note that whenever a SHA-1 is passed as `unsigned char *`, it is the binary representation, as opposed to the ASCII representation in hex characters, which is passed as `char *`. You will see both of these things throughout the code. Now, for the meat: ``` case 0: buf = read_object_with_reference(sha1, argv[1], &size, NULL); ``` This is how you read a blob (actually, not only a blob, but any type of object). To know how the function `read_object_with_reference()` actually works, find the source code for it (something like `git grep read_object_with | grep ":[a-z]"` in the Git repository), and read the source. To find out how the result can be used, just read on in `cmd_cat_file()`: ``` write_or_die(1, buf, size); ``` Sometimes, you do not know where to look for a feature. In many such cases, it helps to search through the output of `git log`, and then `git show` the corresponding commit. Example: If you know that there was some test case for `git bundle`, but do not remember where it was (yes, you `could` `git grep bundle t/`, but that does not illustrate the point!): ``` $ git log --no-merges t/ ``` In the pager (`less`), just search for "bundle", go a few lines back, and see that it is in commit 18449ab0. Now just copy this object name, and paste it into the command line ``` $ git show 18449ab0 ``` Voila. 
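The same hunt can also be delegated to Git itself with the "pickaxe" option `-S`, which restricts the log to commits whose diffs add or remove a given string. A sketch in a throwaway repository with invented contents:

```shell
set -e
export GIT_AUTHOR_NAME=A GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=A GIT_COMMITTER_EMAIL=a@example.com
cd "$(mktemp -d)"
git init -q
echo "feature one" > notes.txt
git add notes.txt && git commit -qm "first"
echo "mention bundle here" >> notes.txt
git commit -qam "second"
# -S limits the log to commits whose diffs add or remove the string
git log --oneline -S"bundle"
```

Only the second commit is listed, since that is where the string "bundle" first appeared.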
Another example: Find out what to do in order to make some script a builtin:

```
$ git log --no-merges --diff-filter=A builtin/*.c
```

You see, Git is actually the best tool to find out about the source of Git itself!

Git glossary
------------

### Git explained

alternate object database
Via the alternates mechanism, a [repository](#def_repository) can inherit part of its [object database](#def_object_database) from another object database, which is called an "alternate".

bare repository
A bare repository is normally an appropriately named [directory](#def_directory) with a `.git` suffix that does not have a locally checked-out copy of any of the files under revision control. That is, all of the Git administrative and control files that would normally be present in the hidden `.git` sub-directory are directly present in the `repository.git` directory instead, and no other files are present and checked out. Usually publishers of public repositories make bare repositories available.

blob object
Untyped [object](#def_object), e.g. the contents of a file.

branch
A "branch" is a line of development. The most recent [commit](#def_commit) on a branch is referred to as the tip of that branch. The tip of the branch is [referenced](#def_ref) by a branch [head](#def_head), which moves forward as additional development is done on the branch. A single Git [repository](#def_repository) can track an arbitrary number of branches, but your [working tree](#def_working_tree) is associated with just one of them (the "current" or "checked out" branch), and [HEAD](#def_HEAD) points to that branch.

cache
Obsolete for: [index](#def_index).

chain
A list of objects, where each [object](#def_object) in the list contains a reference to its successor (for example, the successor of a [commit](#def_commit) could be one of its [parents](#def_parent)).

changeset
BitKeeper/cvsps speak for "[commit](#def_commit)".
Since Git does not store changes, but states, it really does not make sense to use the term "changesets" with Git.

checkout
The action of updating all or part of the [working tree](#def_working_tree) with a [tree object](#def_tree_object) or [blob](#def_blob_object) from the [object database](#def_object_database), and updating the [index](#def_index) and [HEAD](#def_HEAD) if the whole working tree has been pointed at a new [branch](#def_branch).

cherry-picking
In [SCM](#def_SCM) jargon, "cherry pick" means to choose a subset of changes out of a series of changes (typically commits) and record them as a new series of changes on top of a different codebase. In Git, this is performed by the "git cherry-pick" command to extract the change introduced by an existing [commit](#def_commit) and to record it based on the tip of the current [branch](#def_branch) as a new commit.

clean
A [working tree](#def_working_tree) is clean if it corresponds to the [revision](#def_revision) referenced by the current [head](#def_head). Also see "[dirty](#def_dirty)".

commit
As a noun: A single point in the Git history; the entire history of a project is represented as a set of interrelated commits. The word "commit" is often used by Git in the same places other revision control systems use the words "revision" or "version". Also used as shorthand for [commit object](#def_commit_object).

As a verb: The action of storing a new snapshot of the project’s state in the Git history, by creating a new commit representing the current state of the [index](#def_index) and advancing [HEAD](#def_HEAD) to point at the new commit.

commit graph concept, representations and usage
A synonym for the [DAG](#def_DAG) structure formed by the commits in the object database, [referenced](#def_ref) by branch tips, using their [chain](#def_chain) of linked commits. This structure is the definitive commit graph. The graph can be represented in other ways, e.g. the ["commit-graph" file](#def_commit_graph_file).
commit-graph file
The "commit-graph" (normally hyphenated) file is a supplemental representation of the [commit graph](#def_commit_graph_general) which accelerates commit graph walks. The "commit-graph" file is stored either in the `.git/objects/info` directory or in the `info` directory of an alternate object database.

commit object
An [object](#def_object) which contains the information about a particular [revision](#def_revision), such as [parents](#def_parent), committer, author, date and the [tree object](#def_tree_object) which corresponds to the top [directory](#def_directory) of the stored revision.

commit-ish (also committish)
A [commit object](#def_commit_object) or an [object](#def_object) that can be recursively dereferenced to a commit object. The following are all commit-ishes: a commit object, a [tag object](#def_tag_object) that points to a commit object, a tag object that points to a tag object that points to a commit object, etc.

core Git
Fundamental data structures and utilities of Git. Exposes only limited source code management tools.

DAG
Directed acyclic graph. The [commit objects](#def_commit_object) form a directed acyclic graph, because they have parents (directed), and the graph of commit objects is acyclic (there is no [chain](#def_chain) which begins and ends with the same [object](#def_object)).

dangling object
An [unreachable object](#def_unreachable_object) which is not [reachable](#def_reachable) even from other unreachable objects; a dangling object has no references to it from any reference or [object](#def_object) in the [repository](#def_repository).

detached HEAD
Normally the [HEAD](#def_HEAD) stores the name of a [branch](#def_branch), and commands that operate on the history HEAD represents operate on the history leading to the tip of the branch the HEAD points at. However, Git also allows you to [check out](#def_checkout) an arbitrary [commit](#def_commit) that isn’t necessarily the tip of any particular branch.
The HEAD in such a state is called "detached".

Note that commands that operate on the history of the current branch (e.g. `git commit` to build a new history on top of it) still work while the HEAD is detached. They update the HEAD to point at the tip of the updated history without affecting any branch. Commands that update or inquire information *about* the current branch (e.g. `git branch --set-upstream-to` that sets what remote-tracking branch the current branch integrates with) obviously do not work, as there is no (real) current branch to ask about in this state.

directory
The list you get with "ls" :-)

dirty
A [working tree](#def_working_tree) is said to be "dirty" if it contains modifications which have not been [committed](#def_commit) to the current [branch](#def_branch).

evil merge
An evil merge is a [merge](#def_merge) that introduces changes that do not appear in any [parent](#def_parent).

fast-forward
A fast-forward is a special type of [merge](#def_merge) where you have a [revision](#def_revision) and you are "merging" another [branch](#def_branch)'s changes that happen to be a descendant of what you have. In such a case, you do not make a new [merge](#def_merge) [commit](#def_commit) but instead just update your branch to point at the same revision as the branch you are merging. This will happen frequently on a [remote-tracking branch](#def_remote_tracking_branch) of a remote [repository](#def_repository).

fetch
Fetching a [branch](#def_branch) means to get the branch’s [head ref](#def_head_ref) from a remote [repository](#def_repository), to find out which objects are missing from the local [object database](#def_object_database), and to get them, too. See also [git-fetch[1]](git-fetch).

file system
Linus Torvalds originally designed Git to be a user space file system, i.e. the infrastructure to hold files and directories. That ensured the efficiency and speed of Git.

Git archive
Synonym for [repository](#def_repository) (for arch people).
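A fast-forward can be demonstrated in a throwaway repository. A sketch (names invented; `git init -b` assumes Git 2.28 or newer, `git switch` assumes 2.23 or newer):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -qb main
git config user.name "Example"
git config user.email "[email protected]"
echo a > f && git add f && git commit -qm first
git switch -qc feature          # branch off, then commit only on feature
echo b >> f && git commit -qam second
git switch -q main
# main's tip is an ancestor of feature's tip, so no merge commit is
# needed: the merge simply "fast-forwards" main to feature's revision.
git merge --ff-only feature
```

Afterwards `main` and `feature` point at the same commit; `--ff-only` makes Git refuse the merge if a fast-forward were not possible.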
gitfile
A plain file `.git` at the root of a working tree that points at the directory that is the real repository.

grafts
Grafts enable two otherwise different lines of development to be joined together by recording fake ancestry information for commits. This way you can make Git pretend the set of [parents](#def_parent) a [commit](#def_commit) has is different from what was recorded when the commit was created. Configured via the `.git/info/grafts` file.

Note that the grafts mechanism is outdated and can lead to problems transferring objects between repositories; see [git-replace[1]](git-replace) for a more flexible and robust system to do the same thing.

hash
In Git’s context, synonym for [object name](#def_object_name).

head
A [named reference](#def_ref) to the [commit](#def_commit) at the tip of a [branch](#def_branch). Heads are stored in files in the `$GIT_DIR/refs/heads/` directory, except when using packed refs. (See [git-pack-refs[1]](git-pack-refs).)

HEAD
The current [branch](#def_branch). In more detail: Your [working tree](#def_working_tree) is normally derived from the state of the tree referred to by HEAD. HEAD is a reference to one of the [heads](#def_head) in your repository, except when using a [detached HEAD](#def_detached_HEAD), in which case it directly references an arbitrary commit.

head ref
A synonym for [head](#def_head).

hook
During the normal execution of several Git commands, call-outs are made to optional scripts that allow a developer to add functionality or checking. Typically, the hooks allow for a command to be pre-verified and potentially aborted, and allow for a post-notification after the operation is done. The hook scripts are found in the `$GIT_DIR/hooks/` directory, and are enabled by simply removing the `.sample` suffix from the filename. In earlier versions of Git you had to make them executable.

index
A collection of files with stat information, whose contents are stored as objects.
The index is a stored version of your [working tree](#def_working_tree). Truth be told, it can also contain a second, and even a third version of a working tree, which are used when [merging](#def_merge).

index entry
The information regarding a particular file, stored in the [index](#def_index). An index entry can be unmerged, if a [merge](#def_merge) was started, but not yet finished (i.e. if the index contains multiple versions of that file).

master
The default development [branch](#def_branch). Whenever you create a Git [repository](#def_repository), a branch named "master" is created, and becomes the active branch. In most cases, this contains the local development, though that is purely by convention and is not required.

merge
As a verb: To bring the contents of another [branch](#def_branch) (possibly from an external [repository](#def_repository)) into the current branch. In the case where the merged-in branch is from a different repository, this is done by first [fetching](#def_fetch) the remote branch and then merging the result into the current branch. This combination of fetch and merge operations is called a [pull](#def_pull). Merging is performed by an automatic process that identifies changes made since the branches diverged, and then applies all those changes together. In cases where changes conflict, manual intervention may be required to complete the merge.

As a noun: unless it is a [fast-forward](#def_fast_forward), a successful merge results in the creation of a new [commit](#def_commit) representing the result of the merge, and having as [parents](#def_parent) the tips of the merged [branches](#def_branch). This commit is referred to as a "merge commit", or sometimes just a "merge".

object
The unit of storage in Git. It is uniquely identified by the [SHA-1](#def_SHA1) of its contents. Consequently, an object cannot be changed.
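That objects are identified by the SHA-1 of their contents is easy to verify with plumbing: storing the same contents twice yields the same object. A minimal sketch (assumes `git` is installed):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
# Store identical contents twice; the object name is derived purely
# from the contents, so both writes yield the very same object.
oid1=$(echo hello | git hash-object -w --stdin)
oid2=$(echo hello | git hash-object -w --stdin)
[ "$oid1" = "$oid2" ] && echo "same object: $oid1"
git cat-file -p "$oid1"   # prints: hello
```

This content-addressing is also why an object cannot be changed: altering the contents would produce a different object name, i.e. a new object.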
object database
Stores a set of "objects", and an individual [object](#def_object) is identified by its [object name](#def_object_name). The objects usually live in `$GIT_DIR/objects/`.

object identifier (oid)
Synonym for [object name](#def_object_name).

object name
The unique identifier of an [object](#def_object). The object name is usually represented by a 40 character hexadecimal string. Also colloquially called [SHA-1](#def_SHA1).

object type
One of the identifiers "[commit](#def_commit_object)", "[tree](#def_tree_object)", "[tag](#def_tag_object)" or "[blob](#def_blob_object)" describing the type of an [object](#def_object).

octopus
To [merge](#def_merge) more than two [branches](#def_branch).

origin
The default upstream [repository](#def_repository). Most projects have at least one upstream project which they track. By default `origin` is used for that purpose. New upstream updates will be fetched into [remote-tracking branches](#def_remote_tracking_branch) named `origin/name-of-upstream-branch`, which you can see using `git branch -r`.

overlay
Only update and add files to the working directory, but don’t delete them, similar to how `cp -R` would update the contents in the destination directory. This is the default mode in a [checkout](#def_checkout) when checking out files from the [index](#def_index) or a [tree-ish](#def_tree-ish). In contrast, no-overlay mode also deletes tracked files not present in the source, similar to `rsync --delete`.

pack
A set of objects which have been compressed into one file (to save space or to transmit them efficiently).

pack index
The list of identifiers, and other information, of the objects in a [pack](#def_pack), to assist in efficiently accessing the contents of a pack.

pathspec
Pattern used to limit paths in Git commands.
Pathspecs are used on the command line of "git ls-files", "git ls-tree", "git add", "git grep", "git diff", "git checkout", and many other commands to limit the scope of operations to some subset of the tree or working tree. See the documentation of each command for whether paths are relative to the current directory or toplevel.

The pathspec syntax is as follows:

* any path matches itself
* the pathspec up to the last slash represents a directory prefix. The scope of that pathspec is limited to that subtree.
* the rest of the pathspec is a pattern for the remainder of the pathname. Paths relative to the directory prefix will be matched against that pattern using fnmatch(3); in particular, `*` and `?` *can* match directory separators.

For example, `Documentation/*.jpg` will match all .jpg files in the Documentation subtree, including `Documentation/chapter_1/figure_1.jpg`.

A pathspec that begins with a colon `:` has special meaning. In the short form, the leading colon `:` is followed by zero or more "magic signature" letters (which optionally is terminated by another colon `:`), and the remainder is the pattern to match against the path. The "magic signature" consists of ASCII symbols that are neither alphanumeric, glob, regex special characters nor colon. The optional colon that terminates the "magic signature" can be omitted if the pattern begins with a character that does not belong to the "magic signature" symbol set and is not a colon.

In the long form, the leading colon `:` is followed by an open parenthesis `(`, a comma-separated list of zero or more "magic words", and a close parenthesis `)`, and the remainder is the pattern to match against the path.

A pathspec with only a colon means "there is no pathspec". This form should not be combined with other pathspecs.

top
The magic word `top` (magic signature: `/`) makes the pattern match from the root of the working tree, even when you are running the command from inside a subdirectory.
literal
Wildcards in the pattern such as `*` or `?` are treated as literal characters.

icase
Case insensitive match.

glob
Git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the `FNM_PATHNAME` flag: wildcards in the pattern will not match a / in the pathname. For example, "Documentation/*.html" matches "Documentation/git.html" but not "Documentation/ppc/ppc.html" or "tools/perf/Documentation/perf.html".

Two consecutive asterisks ("`**`") in patterns matched against the full pathname may have special meaning:

* A leading "`**`" followed by a slash means match in all directories. For example, "`**/foo`" matches file or directory "`foo`" anywhere, the same as pattern "`foo`". "`**/foo/bar`" matches file or directory "`bar`" anywhere that is directly under directory "`foo`".
* A trailing "`/**`" matches everything inside. For example, "`abc/**`" matches all files inside directory "abc", relative to the location of the `.gitignore` file, with infinite depth.
* A slash followed by two consecutive asterisks then a slash matches zero or more directories. For example, "`a/**/b`" matches "`a/b`", "`a/x/b`", "`a/x/y/b`" and so on.
* Other consecutive asterisks are considered invalid.

Glob magic is incompatible with literal magic.

attr
After `attr:` comes a space-separated list of "attribute requirements", all of which must be met in order for the path to be considered a match; this is in addition to the usual non-magic pathspec pattern matching. See [gitattributes[5]](gitattributes).

Each of the attribute requirements for the path takes one of these forms:

* "`ATTR`" requires that the attribute `ATTR` be set.
* "`-ATTR`" requires that the attribute `ATTR` be unset.
* "`ATTR=VALUE`" requires that the attribute `ATTR` be set to the string `VALUE`.
* "`!ATTR`" requires that the attribute `ATTR` be unspecified.

Note that when matching against a tree object, attributes are still obtained from the working tree, not from the given tree object.
exclude
After a path matches any non-exclude pathspec, it will be run through all exclude pathspecs (magic signature: `!` or its synonym `^`). If it matches, the path is ignored. When there is no non-exclude pathspec, the exclusion is applied to the result set as if invoked without any pathspec.

parent
A [commit object](#def_commit_object) contains a (possibly empty) list of the logical predecessor(s) in the line of development, i.e. its parents.

pickaxe
The term [pickaxe](#def_pickaxe) refers to an option to the diffcore routines that helps select changes that add or delete a given text string. With the `--pickaxe-all` option, it can be used to view the full [changeset](#def_changeset) that introduced or removed, say, a particular line of text. See [git-diff[1]](git-diff).

plumbing
Cute name for [core Git](#def_core_git).

porcelain
Cute name for programs and program suites depending on [core Git](#def_core_git), presenting a high level access to core Git. Porcelains expose more of a [SCM](#def_SCM) interface than the [plumbing](#def_plumbing).

per-worktree ref
Refs that are per-[worktree](#def_worktree), rather than global. This is presently only [HEAD](#def_HEAD) and any refs that start with `refs/bisect/`, but might later include other unusual refs.

pseudoref
Pseudorefs are a class of files under `$GIT_DIR` which behave like refs for the purposes of rev-parse, but which are treated specially by Git. Pseudorefs both have names that are all-caps, and always start with a line consisting of a [SHA-1](#def_SHA1) followed by whitespace. So, HEAD is not a pseudoref, because it is sometimes a symbolic ref. They might optionally contain some additional data. `MERGE_HEAD` and `CHERRY_PICK_HEAD` are examples. Unlike [per-worktree refs](#def_per_worktree_ref), these files cannot be symbolic refs, and never have reflogs. They also cannot be updated through the normal ref update machinery. Instead, they are updated by directly writing to the files.
However, they can be read as if they were refs, so `git rev-parse MERGE_HEAD` will work.

pull
Pulling a [branch](#def_branch) means to [fetch](#def_fetch) it and [merge](#def_merge) it. See also [git-pull[1]](git-pull).

push
Pushing a [branch](#def_branch) means to get the branch’s [head ref](#def_head_ref) from a remote [repository](#def_repository), find out whether it is an ancestor of the branch’s local head ref, and in that case, put all objects which are [reachable](#def_reachable) from the local head ref and missing from the remote repository into the remote [object database](#def_object_database), and update the remote head ref. If the remote [head](#def_head) is not an ancestor of the local head, the push fails.

reachable
All of the ancestors of a given [commit](#def_commit) are said to be "reachable" from that commit. More generally, one [object](#def_object) is reachable from another if we can reach the one from the other by a [chain](#def_chain) that follows [tags](#def_tag) to whatever they tag, [commits](#def_commit_object) to their parents or trees, and [trees](#def_tree_object) to the trees or [blobs](#def_blob_object) that they contain.

reachability bitmaps
Reachability bitmaps store information about the [reachability](#def_reachable) of a selected set of commits in a packfile, or a multi-pack index (MIDX), to speed up object search. The bitmaps are stored in a ".bitmap" file. A repository may have at most one bitmap file in use. The bitmap file may belong to either one pack, or the repository’s multi-pack index (if it exists).

rebase
To reapply a series of changes from a [branch](#def_branch) to a different base, and reset the [head](#def_head) of that branch to the result.

ref
A name that begins with `refs/` (e.g. `refs/heads/master`) that points to an [object name](#def_object_name) or another ref (the latter is called a [symbolic ref](#def_symref)).
For convenience, a ref can sometimes be abbreviated when used as an argument to a Git command; see [gitrevisions[7]](gitrevisions) for details. Refs are stored in the [repository](#def_repository).

The ref namespace is hierarchical. Different subhierarchies are used for different purposes (e.g. the `refs/heads/` hierarchy is used to represent local branches).

There are a few special-purpose refs that do not begin with `refs/`. The most notable example is `HEAD`.

reflog
A reflog shows the local "history" of a ref. In other words, it can tell you what the 3rd last revision in *this* repository was, and what the current state in *this* repository was yesterday at 9:14pm. See [git-reflog[1]](git-reflog) for details.

refspec
A "refspec" is used by [fetch](#def_fetch) and [push](#def_push) to describe the mapping between remote [ref](#def_ref) and local ref.

remote repository
A [repository](#def_repository) which is used to track the same project but resides somewhere else. To communicate with remotes, see [fetch](#def_fetch) or [push](#def_push).

remote-tracking branch
A [ref](#def_ref) that is used to follow changes from another [repository](#def_repository). It typically looks like `refs/remotes/foo/bar` (indicating that it tracks a branch named `bar` in a remote named `foo`), and matches the right-hand-side of a configured fetch [refspec](#def_refspec). A remote-tracking branch should not contain direct modifications or have local commits made to it.

repository
A collection of [refs](#def_ref) together with an [object database](#def_object_database) containing all objects which are [reachable](#def_reachable) from the refs, possibly accompanied by metadata from one or more [porcelains](#def_porcelain). A repository can share an object database with other repositories via the [alternates mechanism](#def_alternate_object_database).

resolve
The action of fixing up manually what a failed automatic [merge](#def_merge) left behind.
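The reflog's local "history" of a ref is easy to inspect in a throwaway repository. A sketch (names invented; `git init -b` assumes Git 2.28 or newer):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -qb main
git config user.name "Example"
git config user.email "[email protected]"
echo a > f && git add f && git commit -qm first
echo b >> f && git commit -qam second
# The reflog records every position HEAD has been at, newest first;
# HEAD@{1} names the revision HEAD pointed at one move ago.
git reflog
git rev-parse HEAD@{1}
```

After these two commits, `HEAD@{1}` (the previous reflog position) and `HEAD~1` (the first parent) happen to name the same commit, but the reflog is purely local: it would also record resets, rebases and checkouts that the ancestry chain does not.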
revision
Synonym for [commit](#def_commit) (the noun).

rewind
To throw away part of the development, i.e. to assign the [head](#def_head) to an earlier [revision](#def_revision).

SCM
Source code management (tool).

SHA-1
"Secure Hash Algorithm 1"; a cryptographic hash function. In the context of Git used as a synonym for [object name](#def_object_name).

shallow clone
Mostly a synonym for [shallow repository](#def_shallow_repository), but the phrase makes it more explicit that it was created by running the `git clone --depth=...` command.

shallow repository
A shallow [repository](#def_repository) has an incomplete history some of whose [commits](#def_commit) have [parents](#def_parent) cauterized away (in other words, Git is told to pretend that these commits do not have the parents, even though they are recorded in the [commit object](#def_commit_object)). This is sometimes useful when you are interested only in the recent history of a project even though the real history recorded in the upstream is much larger. A shallow repository is created by giving the `--depth` option to [git-clone[1]](git-clone), and its history can be later deepened with [git-fetch[1]](git-fetch).

stash entry
An [object](#def_object) used to temporarily store the contents of a [dirty](#def_dirty) working directory and the index for future reuse.

submodule
A [repository](#def_repository) that holds the history of a separate project inside another repository (the latter of which is called [superproject](#def_superproject)).

superproject
A [repository](#def_repository) that references repositories of other projects in its working tree as [submodules](#def_submodule). The superproject knows about the names of (but does not hold copies of) commit objects of the contained submodules.

symref
Symbolic reference: instead of containing the [SHA-1](#def_SHA1) id itself, it is of the format `ref: refs/some/thing` and when referenced, it recursively dereferences to this reference.
[HEAD](#def_HEAD) is a prime example of a symref. Symbolic references are manipulated with the [git-symbolic-ref[1]](git-symbolic-ref) command.

tag
A [ref](#def_ref) under the `refs/tags/` namespace that points to an object of an arbitrary type (typically a tag points to either a [tag](#def_tag_object) or a [commit object](#def_commit_object)). In contrast to a [head](#def_head), a tag is not updated by the `commit` command. A Git tag has nothing to do with a Lisp tag (which would be called an [object type](#def_object_type) in Git’s context). A tag is most typically used to mark a particular point in the commit ancestry [chain](#def_chain).

tag object
An [object](#def_object) containing a [ref](#def_ref) pointing to another object, which can contain a message just like a [commit object](#def_commit_object). It can also contain a (PGP) signature, in which case it is called a "signed tag object".

topic branch
A regular Git [branch](#def_branch) that is used by a developer to identify a conceptual line of development. Since branches are very easy and inexpensive, it is often desirable to have several small branches that each contain very well defined concepts or small incremental yet related changes.

tree
Either a [working tree](#def_working_tree), or a [tree object](#def_tree_object) together with the dependent [blob](#def_blob_object) and tree objects (i.e. a stored representation of a working tree).

tree object
An [object](#def_object) containing a list of file names and modes along with refs to the associated blob and/or tree objects. A [tree](#def_tree) is equivalent to a [directory](#def_directory).

tree-ish (also treeish)
A [tree object](#def_tree_object) or an [object](#def_object) that can be recursively dereferenced to a tree object. Dereferencing a [commit object](#def_commit_object) yields the tree object corresponding to the [revision](#def_revision)'s top [directory](#def_directory).
The following are all tree-ishes: a [commit-ish](#def_commit-ish), a tree object, a [tag object](#def_tag_object) that points to a tree object, a tag object that points to a tag object that points to a tree object, etc.

unmerged index
An [index](#def_index) which contains unmerged [index entries](#def_index_entry).

unreachable object
An [object](#def_object) which is not [reachable](#def_reachable) from a [branch](#def_branch), [tag](#def_tag), or any other reference.

upstream branch
The default [branch](#def_branch) that is merged into the branch in question (or the branch in question is rebased onto). It is configured via `branch.<name>.remote` and `branch.<name>.merge`. If the upstream branch of `A` is `origin/B`, sometimes we say "`A` is tracking `origin/B`".

working tree
The tree of actual checked out files. The working tree normally contains the contents of the [HEAD](#def_HEAD) commit’s tree, plus any local changes that you have made but not yet committed.

worktree
A repository can have zero (i.e. a bare repository), one, or more worktrees attached to it. One "worktree" consists of a "working tree" and repository metadata, most of which are shared among other worktrees of a single repository, and some of which are maintained separately per worktree (e.g. the index, HEAD and pseudorefs like `MERGE_HEAD`, per-worktree refs and per-worktree configuration file).

Appendix a: git quick reference
-------------------------------

This is a quick summary of the major commands; the previous chapters explain how these work in more detail.

### Creating a new repository

From a tarball:

```
$ tar xzf project.tar.gz
$ cd project
$ git init
Initialized empty Git repository in .git/
$ git add .
$ git commit
```

From a remote repository:

```
$ git clone git://example.com/pub/project.git
$ cd project
```

### Managing branches

```
$ git branch			# list all local branches in this repo
$ git switch test		# switch working directory to branch "test"
$ git branch new		# create branch "new" starting at current HEAD
$ git branch -d new		# delete branch "new"
```

Instead of basing a new branch on current HEAD (the default), use:

```
$ git branch new test		# branch named "test"
$ git branch new v2.6.15	# tag named v2.6.15
$ git branch new HEAD^		# commit before the most recent
$ git branch new HEAD^^		# commit before that
$ git branch new test~10	# ten commits before tip of branch "test"
```

Create and switch to a new branch at the same time:

```
$ git switch -c new v2.6.15
```

Update and examine branches from the repository you cloned from:

```
$ git fetch			# update
$ git branch -r			# list
  origin/master
  origin/next
  ...
$ git switch -c masterwork origin/master
```

Fetch a branch from a different repository, and give it a new name in your repository:

```
$ git fetch git://example.com/project.git theirbranch:mybranch
$ git fetch git://example.com/project.git v2.6.15:mybranch
```

Keep a list of repositories you work with regularly:

```
$ git remote add example git://example.com/project.git
$ git remote			# list remote repositories
example
origin
$ git remote show example	# get details
* remote example
  URL: git://example.com/project.git
  Tracked remote branches
    master
    next
    ...
$ git fetch example # update branches from example $ git branch -r # list all remote branches ``` ### Exploring history ``` $ gitk # visualize and browse history $ git log # list all commits $ git log src/ # ...modifying src/ $ git log v2.6.15..v2.6.16 # ...in v2.6.16, not in v2.6.15 $ git log master..test # ...in branch test, not in branch master $ git log test..master # ...in branch master, but not in test $ git log test...master # ...in one branch, not in both $ git log -S'foo()' # ...where difference contain "foo()" $ git log --since="2 weeks ago" $ git log -p # show patches as well $ git show # most recent commit $ git diff v2.6.15..v2.6.16 # diff between two tagged versions $ git diff v2.6.15..HEAD # diff with current head $ git grep "foo()" # search working directory for "foo()" $ git grep v2.6.15 "foo()" # search old tree for "foo()" $ git show v2.6.15:a.txt # look at old version of a.txt ``` Search for regressions: ``` $ git bisect start $ git bisect bad # current version is bad $ git bisect good v2.6.13-rc2 # last known good revision Bisecting: 675 revisions left to test after this # test here, then: $ git bisect good # if this revision is good, or $ git bisect bad # if this revision is bad. # repeat until done. ``` ### Making changes Make sure Git knows who to blame: ``` $ cat >>~/.gitconfig <<\EOF [user] name = Your Name Comes Here email = [email protected] EOF ``` Select file contents to include in the next commit, then make the commit: ``` $ git add a.txt # updated file $ git add b.txt # new file $ git rm c.txt # old file $ git commit ``` Or, prepare and create the commit in one step: ``` $ git commit d.txt # use latest content only of d.txt $ git commit -a # use latest content of all tracked files ``` ### Merging ``` $ git merge test # merge branch "test" into the current branch $ git pull git://example.com/project.git master # fetch and merge in remote branch $ git pull . 
test # equivalent to git merge test ``` ### Sharing your changes Importing or exporting patches: ``` $ git format-patch origin..HEAD # format a patch for each commit # in HEAD but not in origin $ git am mbox # import patches from the mailbox "mbox" ``` Fetch a branch in a different Git repository, then merge into the current branch: ``` $ git pull git://example.com/project.git theirbranch ``` Store the fetched branch into a local branch before merging into the current branch: ``` $ git pull git://example.com/project.git theirbranch:mybranch ``` After creating commits on a local branch, update the remote branch with your commits: ``` $ git push ssh://example.com/project.git mybranch:theirbranch ``` When remote and local branch are both named "test": ``` $ git push ssh://example.com/project.git test ``` Shortcut version for a frequently used remote repository: ``` $ git remote add example ssh://example.com/project.git $ git push example test ``` ### Repository maintenance Check for corruption: ``` $ git fsck ``` Recompress, remove unused cruft: ``` $ git gc ``` Appendix B: notes and todo list for this manual ----------------------------------------------- ### Todo list This is a work in progress. The basic requirements: * It must be readable in order, from beginning to end, by someone intelligent with a basic grasp of the UNIX command line, but without any special knowledge of Git. If necessary, any other prerequisites should be specifically mentioned as they arise. * Whenever possible, section headings should clearly describe the task they explain how to do, in language that requires no more knowledge than necessary: for example, "importing patches into a project" rather than "the `git am` command" Think about how to create a clear chapter dependency graph that will allow people to get to important topics without necessarily reading everything in between. Scan `Documentation/` for other stuff left out; in particular: * howto’s * some of `technical/`?
* hooks * list of commands in [git[1]](git) Scan email archives for other stuff left out Scan man pages to see if any assume more background than this manual provides. Add more good examples. Entire sections of just cookbook examples might be a good idea; maybe make an "advanced examples" section a standard end-of-chapter section? Include cross-references to the glossary, where appropriate. Add a section on working with other version control systems, including CVS, Subversion, and just imports of series of release tarballs. Write a chapter on using plumbing and writing scripts. Alternates, clone --reference, etc. More on recovery from repository corruption. See: <https://lore.kernel.org/git/[email protected]/> <https://lore.kernel.org/git/[email protected]/>
git gitcli gitcli ====== Name ---- gitcli - Git command-line interface and conventions Synopsis -------- gitcli Description ----------- This manual describes the conventions used throughout the Git CLI. Many commands take revisions (most often "commits", but sometimes "tree-ish", depending on the context and command) and paths as their arguments. Here are the rules: * Options come first and then args. A subcommand may take dashed options (which may take their own arguments, e.g. "--max-parents 2") and arguments. You SHOULD give dashed options first and then arguments. Some commands may accept dashed options after you have already given non-option arguments (which may make the command ambiguous), but you should not rely on it (because eventually we may find a way to fix this ambiguity by enforcing the "options then args" rule). * Revisions come first and then paths. E.g. in `git diff v1.0 v2.0 arch/x86 include/asm-x86`, `v1.0` and `v2.0` are revisions and `arch/x86` and `include/asm-x86` are paths. * When an argument can be misunderstood as either a revision or a path, it can be disambiguated by placing `--` between them. E.g. `git diff -- HEAD` is, "I have a file called HEAD in my work tree. Please show changes between the version I staged in the index and what I have in the work tree for that file", not "show difference between the HEAD commit and the work tree as a whole". You can say `git diff HEAD --` to ask for the latter. * Without disambiguating `--`, Git makes a reasonable guess, but errors out and asks you to disambiguate when ambiguous. E.g. if you have a file called HEAD in your work tree, `git diff HEAD` is ambiguous, and you have to say either `git diff HEAD --` or `git diff -- HEAD` to disambiguate. * Because `--` disambiguates revisions and paths in some commands, it cannot be used for those commands to separate options and revisions.
You can use `--end-of-options` for this (it also works for commands that do not distinguish between revisions and paths, in which case it is simply an alias for `--`). When writing a script that is expected to handle random user-input, it is a good practice to make it explicit which arguments are which by placing disambiguating `--` at appropriate places. * Many commands allow wildcards in paths, but you need to protect them from getting globbed by the shell. These two mean different things: ``` $ git restore *.c $ git restore \*.c ``` The former lets your shell expand the fileglob, and you are asking the dot-C files in your working tree to be overwritten with the version in the index. The latter passes the `*.c` to Git, and you are asking the paths in the index that match the pattern to be checked out to your working tree. After running `git add hello.c; rm hello.c`, you will `not` see `hello.c` in your working tree with the former, but with the latter you will. * Just as the filesystem `.` (period) refers to the current directory, using a `.` as a repository name in Git (a dot-repository) is a relative path and means your current repository. Here are the rules regarding the "flags" that you should follow when you are scripting Git: * It’s preferred to use the non-dashed form of Git commands, which means that you should prefer `git foo` to `git-foo`. * Split short options into separate words (prefer `git foo -a -b` to `git foo -ab`; the latter may not even work). * When a command-line option takes an argument, use the `stuck` form. In other words, write `git foo -oArg` instead of `git foo -o Arg` for short options, and `git foo --long-opt=Arg` instead of `git foo --long-opt Arg` for long options. An option that takes an optional option-argument must be written in the `stuck` form. * When you give a revision parameter to a command, make sure the parameter is not ambiguous with a name of a file in the work tree. E.g.
do not write `git log -1 HEAD` but write `git log -1 HEAD --`; the former will not work if you happen to have a file called `HEAD` in the work tree. * Many commands allow a long option `--option` to be abbreviated only to their unique prefix (e.g. if there is no other option whose name begins with `opt`, you may be able to spell `--opt` to invoke the `--option` flag), but you should fully spell them out when writing your scripts; later versions of Git may introduce a new option whose name shares the same prefix, e.g. `--optimize`, to make a short prefix that used to be unique no longer unique. Enhanced option parser ---------------------- From the Git 1.5.4 series and further, many Git commands (not all of them at the time of the writing though) come with an enhanced option parser. Here is a list of the facilities provided by this option parser. ### Magic Options Commands which have the enhanced option parser activated all understand a couple of magic command-line options: -h gives a pretty-printed usage of the command. ``` $ git describe -h usage: git describe [<options>] <commit-ish>* or: git describe [<options>] --dirty --contains find the tag that comes after the commit --debug debug search strategy on stderr --all use any ref --tags use any tag, even unannotated --long always use long format --abbrev[=<n>] use <n> digits to display SHA-1s ``` Note that some subcommands (e.g. `git grep`) may behave differently when there are things on the command line other than `-h`, but `git subcmd -h` without anything else on the command line is meant to consistently give the usage. --help-all Some Git commands take options that are only used for plumbing or that are deprecated, and such options are hidden from the default usage. This option gives the full list of options. ### Negating options Options with long option names can be negated by prefixing `--no-`. For example, `git branch` has the option `--track` which is `on` by default.
You can use `--no-track` to override that behaviour. The same goes for `--color` and `--no-color`. ### Aggregating short options Commands that support the enhanced option parser allow you to aggregate short options. This means that you can for example use `git rm -rf` or `git clean -fdx`. ### Abbreviating long options Commands that support the enhanced option parser accept a unique prefix of a long option as if it were fully spelled out, but use this with caution. For example, `git commit --amen` behaves as if you typed `git commit --amend`, but that is true only until a later version of Git introduces another option that shares the same prefix, e.g. a `git commit --amenity` option. ### Separating argument from the option You can write the mandatory option parameter to an option as a separate word on the command line. That means that all the following uses work: ``` $ git foo --long-opt=Arg $ git foo --long-opt Arg $ git foo -oArg $ git foo -o Arg ``` However, this is **NOT** allowed for switches with an optional value, where the `stuck` form must be used: ``` $ git describe --abbrev HEAD # correct $ git describe --abbrev=10 HEAD # correct $ git describe --abbrev 10 HEAD # NOT WHAT YOU MEANT ``` Notes on frequently confused options ------------------------------------ Many commands that can work on files in the working tree and/or in the index can take `--cached` and/or `--index` options. Sometimes people incorrectly think that, because the index was originally called cache, these two are synonyms. They are **not** — these two options mean very different things. * The `--cached` option is used to ask a command that usually works on files in the working tree to **only** work with the index. For example, `git grep`, when used without a commit to specify from which commit to look for strings in, usually works on files in the working tree, but with the `--cached` option, it looks for strings in the index.
* The `--index` option is used to ask a command that usually works on files in the working tree to **also** affect the index. For example, `git stash apply` usually merges changes recorded in a stash entry to the working tree, but with the `--index` option, it also merges the changes into the index. The `git apply` command can be used with `--cached` and `--index` (but not at the same time). Usually the command only affects the files in the working tree, but with `--index`, it patches both the files and their index entries, and with `--cached`, it modifies only the index entries. See also <https://lore.kernel.org/git/[email protected]/> and <https://lore.kernel.org/git/[email protected]/> for further information. Some other commands that also work on files in the working tree and/or in the index can take `--staged` and/or `--worktree`. * `--staged` is exactly like `--cached`, which is used to ask a command to only work on the index, not the working tree. * `--worktree` is the opposite, to ask a command to work on the working tree only, not the index. * The two options can be specified together to ask a command to work on both the index and the working tree. git git-cvsimport git-cvsimport ============= Name ---- git-cvsimport - Salvage your data out of another SCM people love to hate Synopsis -------- ``` git cvsimport [-o <branch-for-HEAD>] [-h] [-v] [-d <CVSROOT>] [-A <author-conv-file>] [-p <options-for-cvsps>] [-P <file>] [-C <git-repository>] [-z <fuzz>] [-i] [-k] [-u] [-s <subst>] [-a] [-m] [-M <regex>] [-S <regex>] [-L <commit-limit>] [-r <remote>] [-R] [<CVS-module>] ``` Description ----------- **WARNING:** `git cvsimport` uses cvsps version 2, which is considered deprecated; it does not work with cvsps version 3 and later. If you are performing a one-shot import of a CVS repository consider using [cvs2git](http://cvs2svn.tigris.org/cvs2git.html) or [cvs-fast-export](http://www.catb.org/esr/cvs-fast-export/). Imports a CVS repository into Git.
It will either create a new repository, or incrementally import into an existing one. Splitting the CVS log into patch sets is done by `cvsps`. At least version 2.1 is required. **WARNING:** for certain situations the import leads to incorrect results. Please see the section [ISSUES](#issues) for further reference. You should **never** do any work of your own on the branches that are created by `git cvsimport`. By default initial import will create and populate a "master" branch from the CVS repository’s main branch which you’re free to work with; after that, you need to `git merge` incremental imports, or any CVS branches, yourself. It is advisable to specify a named remote via -r to separate and protect the incoming branches. If you intend to set up a shared public repository that all developers can read/write, or if you want to use [git-cvsserver[1]](git-cvsserver), then you probably want to make a bare clone of the imported repository, and use the clone as the shared repository. See [gitcvs-migration[7]](gitcvs-migration). Options ------- -v Verbosity: let `cvsimport` report what it is doing. -d <CVSROOT> The root of the CVS archive. May be local (a simple path) or remote; currently, only the :local:, :ext: and :pserver: access methods are supported. If not given, `git cvsimport` will try to read it from `CVS/Root`. If no such file exists, it checks for the `CVSROOT` environment variable. <CVS-module> The CVS module you want to import. Relative to <CVSROOT>. If not given, `git cvsimport` tries to read it from `CVS/Repository`. -C <target-dir> The Git repository to import to. If the directory doesn’t exist, it will be created. Default is the current directory. -r <remote> The Git remote to import this CVS repository into. Moves all CVS branches into remotes/<remote>/<branch> akin to the way `git clone` uses `origin` by default. 
-o <branch-for-HEAD> When no remote is specified (via -r) the `HEAD` branch from CVS is imported to the `origin` branch within the Git repository, as `HEAD` already has a special meaning for Git. When a remote is specified the `HEAD` branch is named remotes/<remote>/master mirroring `git clone` behaviour. Use this option if you want to import into a different branch. Use `-o master` for continuing an import that was initially done by the old cvs2git tool. -i Import-only: don’t perform a checkout after importing. This option ensures the working directory and index remain untouched and will not create them if they do not exist. -k Kill keywords: will extract files with `-kk` from the CVS archive to avoid noisy changesets. Highly recommended, but off by default to preserve compatibility with early imported trees. -u Convert underscores in tag and branch names to dots. -s <subst> Substitute the character "/" in branch names with <subst> -p <options-for-cvsps> Additional options for cvsps. The options `-u` and `-A` are implicit and should not be used here. If you need to pass multiple options, separate them with a comma. -z <fuzz> Pass the timestamp fuzz factor to cvsps, in seconds. If unset, cvsps defaults to 300s. -P <cvsps-output-file> Instead of calling cvsps, read the provided cvsps output file. Useful for debugging or when cvsps is being handled outside cvsimport. -m Attempt to detect merges based on the commit message. This option will enable default regexes that try to capture the source branch name from the commit message. -M <regex> Attempt to detect merges based on the commit message with a custom regex. It can be used with `-m` to enable the default regexes as well. You must escape forward slashes. The regex must capture the source branch name in $1. This option can be used several times to provide several detection regexes. -S <regex> Skip paths matching the regex. -a Import all commits, including recent ones. 
cvsimport by default skips commits with a timestamp less than 10 minutes old. -L <limit> Limit the number of commits imported. Workaround for cases where cvsimport leaks memory. -A <author-conv-file> CVS by default uses the Unix username when writing its commit logs. Using this option and an author-conv-file maps the name recorded in CVS to author name, e-mail and optional time zone: ``` exon=Andreas Ericsson <[email protected]> spawn=Simon Pawn <[email protected]> America/Chicago ``` `git cvsimport` will make it appear as if those authors had their GIT\_AUTHOR\_NAME and GIT\_AUTHOR\_EMAIL set properly all along. If a time zone is specified, GIT\_AUTHOR\_DATE will have the corresponding offset applied. For convenience, this data is saved to `$GIT_DIR/cvs-authors` each time the `-A` option is provided and read from that same file each time `git cvsimport` is run. It is not recommended to use this feature if you intend to export changes back to CVS again later with `git cvsexportcommit`. -R Generate a `$GIT_DIR/cvs-revisions` file containing a mapping from CVS revision numbers to newly-created Git commit IDs. The generated file will contain one line for each (filename, revision) pair imported; each line will look like ``` src/widget.c 1.1 1d862f173cdc7325b6fa6d2ae1cfd61fd1b512b7 ``` The revision data is appended to the file if it already exists, for use when doing incremental imports. This option may be useful if you have CVS revision numbers stored in commit messages, bug-tracking systems, email archives, and the like. -h Print a short usage message and exit. Output ------ If `-v` is specified, the script reports what it is doing. Otherwise, success is indicated the Unix way, i.e. by simply exiting with a zero exit status. Issues ------ Problems related to timestamps: * If timestamps of commits in the CVS repository are not stable enough to be used for ordering commits, changes may show up in the wrong order.
* If any files were ever "cvs import"ed more than once (e.g., import of more than one vendor release), the HEAD contains the wrong content. * If the timestamp order of different files crosses the revision order within the commit-matching time window, the order of commits may be wrong. Problems related to branches: * Branches on which no commits have been made are not imported. * All files from the branching point are added to a branch even if never added in CVS. * This applies to files added to the source branch **after** a daughter branch was created: if previously no commit was made on the daughter branch they will erroneously be added to the daughter branch in git. Problems related to tags: * Multiple tags on the same revision are not imported. If you suspect that any of these issues may apply to the repository you want to import, consider using cvs2git: * cvs2git (part of cvs2svn), `http://subversion.apache.org/` git git-show-branch git-show-branch =============== Name ---- git-show-branch - Show branches and their commits Synopsis -------- ``` git show-branch [-a | --all] [-r | --remotes] [--topo-order | --date-order] [--current] [--color[=<when>] | --no-color] [--sparse] [--more=<n> | --list | --independent | --merge-base] [--no-name | --sha1-name] [--topics] [(<rev> | <glob>)…] git show-branch (-g | --reflog)[=<n>[,<base>]] [--list] [<ref>] ``` Description ----------- Shows the commit ancestry graph starting from the commits named with <rev>s or <glob>s (or all refs under refs/heads and/or refs/tags) semi-visually. It cannot show more than 29 branches and commits at a time. It uses `showbranch.default` multi-valued configuration items if no <rev> or <glob> is given on the command line. Options ------- <rev> Arbitrary extended SHA-1 expression (see [gitrevisions[7]](gitrevisions)) that typically names a branch head or a tag. <glob> A glob pattern that matches branch or tag names under refs/.
For example, if you have many topic branches under refs/heads/topic, giving `topic/*` would show all of them. -r --remotes Show the remote-tracking branches. -a --all Show both remote-tracking branches and local branches. --current With this option, the command includes the current branch in the list of revs to be shown when it is not given on the command line. --topo-order By default, the branches and their commits are shown in reverse chronological order. This option makes them appear in topological order (i.e., descendant commits are shown before their parents). --date-order This option is similar to `--topo-order` in the sense that no parent comes before all of its children, but otherwise commits are ordered according to their commit date. --sparse By default, the output omits merges that are reachable from only one tip being shown. This option makes them visible. --more=<n> Usually the command stops output upon showing the commit that is the common ancestor of all the branches. This flag tells the command to go <n> more common commits beyond that. When <n> is negative, display only the <reference>s given, without showing the commit ancestry tree. --list Synonym for `--more=-1` --merge-base Instead of showing the commit list, determine possible merge bases for the specified commits. All merge bases will be contained in all specified commits. This is different from how [git-merge-base[1]](git-merge-base) handles the case of three or more commits. --independent Among the <reference>s given, display only the ones that cannot be reached from any other <reference>. --no-name Do not show naming strings for each commit. --sha1-name Instead of naming the commits using the path to reach them from heads (e.g. "master~2" to mean the grandparent of "master"), name them with the unique prefix of their object names. --topics Shows only commits that are NOT on the first branch given.
This helps track topic branches by hiding any commit that is already in the main line of development. When given "git show-branch --topics master topic1 topic2", this will show the revisions given by "git rev-list ^master topic1 topic2" -g --reflog[=<n>[,<base>]] [<ref>] Shows <n> most recent ref-log entries for the given ref. If <base> is given, <n> entries going back from that entry. <base> can be specified as count or date. When no explicit <ref> parameter is given, it defaults to the current branch (or `HEAD` if it is detached). --color[=<when>] Color the status sign (one of these: `*` `!` `+` `-`) of each commit corresponding to the branch it’s in. The value must be always (the default), never, or auto. --no-color Turn off colored output, even when the configuration file gives the default to color output. Same as `--color=never`. Note that --more, --list, --independent and --merge-base options are mutually exclusive. Output ------ Given N <references>, the first N lines are the one-line description from their commit message. The branch head that is pointed at by $GIT\_DIR/HEAD is prefixed with an asterisk `*` character while other heads are prefixed with a `!` character. Following these N lines, one-line log for each commit is displayed, indented N places. If a commit is on the I-th branch, the I-th indentation character shows a `+` sign; otherwise it shows a space. Merge commits are denoted by a `-` sign. Each commit shows a short name that can be used as an extended SHA-1 to name that commit. The following example shows three branches, "master", "fixes" and "mhf": ``` $ git show-branch master fixes mhf * [master] Add 'git show-branch'. ! [fixes] Introduce "reset type" flag to "git reset" ! [mhf] Allow "+remote:local" refspec to cause --force when fetching. --- + [mhf] Allow "+remote:local" refspec to cause --force when fetching. + [mhf~1] Use git-octopus when pulling more than one heads. 
+ [fixes] Introduce "reset type" flag to "git reset" + [mhf~2] "git fetch --force". + [mhf~3] Use .git/remote/origin, not .git/branches/origin. + [mhf~4] Make "git pull" and "git fetch" default to origin + [mhf~5] Infamous 'octopus merge' + [mhf~6] Retire git-parse-remote. + [mhf~7] Multi-head fetch. + [mhf~8] Start adding the $GIT_DIR/remotes/ support. *++ [master] Add 'git show-branch'. ``` These three branches all forked from a common commit, [master], whose commit message is "Add 'git show-branch'". The "fixes" branch adds one commit "Introduce "reset type" flag to "git reset"". The "mhf" branch adds many other commits. The current branch is "master". Examples -------- If you keep your primary branches immediately under `refs/heads`, and topic branches in subdirectories of it, having the following in the configuration file may help: ``` [showbranch] default = --topo-order default = heads/* ``` With this, `git show-branch` without extra parameters would show only the primary branches. In addition, if you happen to be on your topic branch, it is shown as well. ``` $ git show-branch --reflog="10,1 hour ago" --list master ``` shows 10 reflog entries going back from the tip as of 1 hour ago. Without `--list`, the output also shows how these tips are topologically related with each other. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: showBranch.default The default set of branches for [git-show-branch[1]](git-show-branch). See [git-show-branch[1]](git-show-branch).
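The three-branch layout in the example above can be reproduced from scratch to experiment with the output format. A minimal sketch in a throwaway repository; the repository name `demo`, the branch names, and the empty commits are all illustrative (`git init -b` requires Git 2.28 or later):

```shell
$ git init -b master demo && cd demo
$ git commit --allow-empty -m "base"             # common ancestor of all three branches
$ git branch fixes                               # fork "fixes" and "mhf" at the base
$ git branch mhf
$ git commit --allow-empty -m "work on master"   # advance "master" by one commit
$ git show-branch master fixes mhf
```

The last line of the output marks the base commit with a status sign in all three columns (`*++`), just as `*++ [master]` does in the larger example above.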
git git-hook git-hook ======== Name ---- git-hook - Run git hooks Synopsis -------- ``` git hook run [--ignore-missing] <hook-name> [-- <hook-args>] ``` Description ----------- A command interface to running git hooks (see [githooks[5]](githooks)), for use by other scripted git commands. Subcommands ----------- run Run the `<hook-name>` hook. See [githooks[5]](githooks) for supported hook names. Any positional arguments to the hook should be passed after a mandatory `--` (or `--end-of-options`, see [gitcli[7]](gitcli)). See [githooks[5]](githooks) for arguments hooks might expect (if any). Options ------- --ignore-missing Ignore any missing hook by quietly returning zero. Used for tools that want to do a blind one-shot run of a hook that may or may not be present. See also -------- [githooks[5]](githooks) git git-credential-cache git-credential-cache ==================== Name ---- git-credential-cache - Helper to temporarily store passwords in memory Synopsis -------- ``` git config credential.helper 'cache [<options>]' ``` Description ----------- This command caches credentials in memory for use by future Git programs. The stored credentials never touch the disk, and are forgotten after a configurable timeout. The cache is accessible over a Unix domain socket, restricted to the current user by filesystem permissions. You probably don’t want to invoke this command directly; it is meant to be used as a credential helper by other parts of Git. See [gitcredentials[7]](gitcredentials) or `EXAMPLES` below. Options ------- --timeout <seconds> Number of seconds to cache credentials (default: 900). --socket <path> Use `<path>` to contact a running cache daemon (or start a new cache daemon if one is not started). Defaults to `$XDG_CACHE_HOME/git/credential/socket` unless `~/.git-credential-cache/` exists in which case `~/.git-credential-cache/socket` is used instead. 
If your home directory is on a network-mounted filesystem, you may need to change this to a local filesystem. You must specify an absolute path. Controlling the daemon ---------------------- If you would like the daemon to exit early, forgetting all cached credentials before their timeout, you can issue an `exit` action: ``` git credential-cache exit ``` Examples -------- The point of this helper is to reduce the number of times you must type your username or password. For example: ``` $ git config credential.helper cache $ git push http://example.com/repo.git Username: <type your username> Password: <type your password> [work for 5 more minutes] $ git push http://example.com/repo.git [your credentials are used automatically] ``` You can provide options via the credential.helper configuration variable (this example increases the cache time to 1 hour): ``` $ git config credential.helper 'cache --timeout=3600' ``` git git-maintenance git-maintenance =============== Name ---- git-maintenance - Run tasks to optimize Git repository data Synopsis -------- ``` git maintenance run [<options>] git maintenance start [--scheduler=<scheduler>] git maintenance (stop|register|unregister) [<options>] ``` Description ----------- Run tasks to optimize Git repository data, speeding up other Git commands and reducing storage requirements for the repository. Git commands that add repository data, such as `git add` or `git fetch`, are optimized for a responsive user experience. These commands do not take time to optimize the Git data, since such optimizations scale with the full size of the repository while these user commands each perform a relatively small action. The `git maintenance` command provides flexibility for how to optimize the Git repository. Subcommands ----------- run Run one or more maintenance tasks. If one or more `--task` options are specified, then those tasks are run in that order. 
Otherwise, the tasks are determined by which `maintenance.<task>.enabled` config options are true. By default, only `maintenance.gc.enabled` is true. start Start running maintenance on the current repository. This performs the same config updates as the `register` subcommand, then updates the background scheduler to run `git maintenance run --scheduled` on an hourly basis. stop Halt the background maintenance schedule. The current repository is not removed from the list of maintained repositories, in case the background maintenance is restarted later. register Initialize Git config values so any scheduled maintenance will start running on this repository. This adds the repository to the `maintenance.repo` config variable in the current user’s global config, or the config specified by --config-file option, and enables some recommended configuration values for `maintenance.<task>.schedule`. The tasks that are enabled are safe for running in the background without disrupting foreground processes. The `register` subcommand will also set the `maintenance.strategy` config value to `incremental`, if this value is not previously set. The `incremental` strategy uses the following schedule for each maintenance task: * `gc`: disabled. * `commit-graph`: hourly. * `prefetch`: hourly. * `loose-objects`: daily. * `incremental-repack`: daily. `git maintenance register` will also disable foreground maintenance by setting `maintenance.auto = false` in the current repository. This config setting will remain after a `git maintenance unregister` command. unregister Remove the current repository from background maintenance. This only removes the repository from the configured list. It does not stop the background maintenance processes from running. The `unregister` subcommand will report an error if the current repository is not already registered. Use the `--force` option to return success even when the current repository is not registered. 
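The subcommands above can be exercised end-to-end in a scratch repository. A minimal sketch; the repository name `demo` is illustrative, and note that `register` and `unregister` modify your global Git configuration:

```shell
$ git init demo && cd demo
$ git maintenance register            # add this repo to maintenance.repo in the global config
$ git config --global --get-all maintenance.repo
$ git maintenance run --task=gc       # run a single task now, in the foreground
$ git maintenance unregister          # remove the repo from the maintained list
```

After `unregister`, the `maintenance.repo` entry for this repository is gone, so a subsequent `git config --global --get-all maintenance.repo` no longer lists it.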
Tasks ----- commit-graph The `commit-graph` job updates the `commit-graph` files incrementally, then verifies that the written data is correct. The incremental write is safe to run alongside concurrent Git processes since it will not expire `.graph` files that were in the previous `commit-graph-chain` file. They will be deleted by a later run based on the expiration delay. prefetch The `prefetch` task updates the object directory with the latest objects from all registered remotes. For each remote, a `git fetch` command is run. The configured refspec is modified to place all requested refs within `refs/prefetch/`. Also, tags are not updated. This is done to avoid disrupting the remote-tracking branches. The end users expect these refs to stay unmoved unless they initiate a fetch. With the prefetch task, however, the objects necessary to complete a later real fetch would already be obtained, so the real fetch would go faster. In the ideal case, it will just become an update to a bunch of remote-tracking branches without any object transfer. gc Clean up unnecessary files and optimize the local repository. "GC" stands for "garbage collection," but this task performs many smaller tasks. This task can be expensive for large repositories, as it repacks all Git objects into a single pack-file. It can also be disruptive in some situations, as it deletes stale data. See [git-gc[1]](git-gc) for more details on garbage collection in Git. loose-objects The `loose-objects` job cleans up loose objects and places them into pack-files. In order to prevent race conditions with concurrent Git commands, it follows a two-step process. First, it deletes any loose objects that already exist in a pack-file; concurrent Git processes will examine the pack-file for the object data instead of the loose object. Second, it creates a new pack-file (starting with "loose-") containing a batch of loose objects.
The batch size is limited to 50 thousand objects to prevent the job from taking too long on a repository with many loose objects. The `gc` task writes unreachable objects as loose objects to be cleaned up by a later step only if they are not re-added to a pack-file; for this reason it is not advisable to enable both the `loose-objects` and `gc` tasks at the same time.

incremental-repack
The `incremental-repack` job repacks the object directory using the `multi-pack-index` feature. In order to prevent race conditions with concurrent Git commands, it follows a two-step process. First, it calls `git multi-pack-index expire` to delete pack-files unreferenced by the `multi-pack-index` file. Second, it calls `git multi-pack-index repack` to select several small pack-files and repack them into a bigger one, and then update the `multi-pack-index` entries that refer to the small pack-files to refer to the new pack-file. This prepares those small pack-files for deletion upon the next run of `git multi-pack-index expire`. The selection of the small pack-files is such that the expected size of the big pack-file is at least the batch size; see the `--batch-size` option for the `repack` subcommand in [git-multi-pack-index[1]](git-multi-pack-index). The default batch-size is zero, which is a special case that attempts to repack all pack-files into a single pack-file.

pack-refs
The `pack-refs` task collects the loose reference files and packs them into a single file. This speeds up operations that need to iterate across many references. See [git-pack-refs[1]](git-pack-refs) for more information.

Options
-------

--auto
When combined with the `run` subcommand, run maintenance tasks only if certain thresholds are met. For example, the `gc` task runs when the number of loose objects exceeds the number stored in the `gc.auto` config setting, or when the number of pack-files exceeds the `gc.autoPackLimit` config setting. Not compatible with the `--schedule` option.
--schedule When combined with the `run` subcommand, run maintenance tasks only if certain time conditions are met, as specified by the `maintenance.<task>.schedule` config value for each `<task>`. This config value specifies a number of seconds since the last time that task ran, according to the `maintenance.<task>.lastRun` config value. The tasks that are tested are those provided by the `--task=<task>` option(s) or those with `maintenance.<task>.enabled` set to true. --quiet Do not report progress or other information over `stderr`. --task=<task> If this option is specified one or more times, then only run the specified tasks in the specified order. If no `--task=<task>` arguments are specified, then only the tasks with `maintenance.<task>.enabled` configured as `true` are considered. See the `TASKS` section for the list of accepted `<task>` values. --scheduler=auto|crontab|systemd-timer|launchctl|schtasks When combined with the `start` subcommand, specify the scheduler for running the hourly, daily and weekly executions of `git maintenance run`. Possible values for `<scheduler>` are `auto`, `crontab` (POSIX), `systemd-timer` (Linux), `launchctl` (macOS), and `schtasks` (Windows). When `auto` is specified, the appropriate platform-specific scheduler is used; on Linux, `systemd-timer` is used if available, otherwise `crontab`. Default is `auto`. Troubleshooting --------------- The `git maintenance` command is designed to simplify the repository maintenance patterns while minimizing user wait time during Git commands. A variety of configuration options are available to allow customizing this process. The default maintenance options focus on operations that complete quickly, even on large repositories. Users may find some cases where scheduled maintenance tasks do not run as frequently as intended. 
Each `git maintenance run` command takes a lock on the repository’s object database, and this prevents other concurrent `git maintenance run` commands from running on the same repository. Without this safeguard, competing processes could leave the repository in an unpredictable state. The background maintenance schedule runs `git maintenance run` processes on an hourly basis. Each run executes the "hourly" tasks. At midnight, that process also executes the "daily" tasks. At midnight on the first day of the week, that process also executes the "weekly" tasks. A single process iterates over each registered repository, performing the scheduled tasks for that frequency. Depending on the number of registered repositories and their sizes, this process may take longer than an hour. In this case, multiple `git maintenance run` commands may run on the same repository at the same time, colliding on the object database lock. This results in one of the two tasks not running. If you find that some maintenance windows are taking longer than one hour to complete, then consider reducing the complexity of your maintenance tasks. For example, the `gc` task is much slower than the `incremental-repack` task. However, this comes at a cost of a slightly larger object database. Consider moving more expensive tasks to be run less frequently. Expert users may consider scheduling their own maintenance tasks using a different schedule than is available through `git maintenance start` and Git configuration options. These users should be aware of the object database lock and how concurrent `git maintenance run` commands behave. Further, the `git gc` command should not be combined with `git maintenance run` commands. `git gc` modifies the object database but does not take the lock in the same way as `git maintenance run`. If possible, use `git maintenance run --task=gc` instead of `git gc`. 
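The advice above (prefer `git maintenance run --task=gc` over `git gc`, and use `--task`/`--auto` to control what runs) can be sketched in a scratch repository; the commands below are illustrative only:

```shell
# Scratch repository with a single commit to give the tasks something to do.
tmp=$(mktemp -d)
git init --quiet "$tmp/repo"
cd "$tmp/repo"
git -c user.name=u -c user.email=u@example.com \
    commit --allow-empty --quiet -m init

# Instead of `git gc`: run gc through the maintenance machinery so the
# object-database lock is honored and concurrent runs cannot collide.
git maintenance run --quiet --task=gc

# Tasks can also be chained in an explicit order, or gated on the
# gc.auto / gc.autoPackLimit-style thresholds:
git maintenance run --quiet --task=commit-graph --task=loose-objects
git maintenance run --quiet --auto
```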
The following sections describe the mechanisms put in place to run background maintenance by `git maintenance start` and how to customize them.

Background maintenance on POSIX systems
---------------------------------------

The standard mechanism for scheduling background tasks on POSIX systems is cron(8). This tool executes commands based on a given schedule. The current list of user-scheduled tasks can be found by running `crontab -l`. The schedule written by `git maintenance start` is similar to this:

```
# BEGIN GIT MAINTENANCE SCHEDULE
# The following schedule was created by Git
# Any edits made in this region might be
# replaced in the future by a Git command.

0 1-23 * * * "/<path>/git" --exec-path="/<path>" for-each-repo --config=maintenance.repo maintenance run --schedule=hourly
0 0 * * 1-6 "/<path>/git" --exec-path="/<path>" for-each-repo --config=maintenance.repo maintenance run --schedule=daily
0 0 * * 0 "/<path>/git" --exec-path="/<path>" for-each-repo --config=maintenance.repo maintenance run --schedule=weekly

# END GIT MAINTENANCE SCHEDULE
```

The comments are used as a region to mark the schedule as written by Git. Any modifications within this region will be completely deleted by `git maintenance stop` or overwritten by `git maintenance start`.

The `crontab` entry specifies the full path of the `git` executable to ensure that the executed `git` command is the same one with which `git maintenance start` was issued independent of `PATH`. If the same user runs `git maintenance start` with multiple Git executables, then only the latest executable is used.

These commands use `git for-each-repo --config=maintenance.repo` to run `git maintenance run --schedule=<frequency>` on each repository listed in the multi-valued `maintenance.repo` config option. These are typically loaded from the user-specific global config.
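A custom entry can sit alongside the managed schedule, as long as it stays outside the BEGIN/END region. As an illustrative sketch only (the repository path is hypothetical, and `/<path>` stands for your Git install location as in the schedule above), an extra nightly `gc` for one large repository might look like:

```
# Custom entry: keep it outside the Git-managed BEGIN/END region.
# Runs an extra gc at 3am every night on one large repository.
0 3 * * * "/<path>/git" --exec-path="/<path>" -C /path/to/big/repo maintenance run --task=gc
```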
The `git maintenance` process then determines which maintenance tasks are configured to run on each repository with each `<frequency>` using the `maintenance.<task>.schedule` config options. These values are loaded from the global or repository config values.

If the config values are insufficient to achieve your desired background maintenance schedule, then you can create your own schedule. If you run `crontab -e`, then an editor will load with your user-specific `cron` schedule. In that editor, you can add your own schedule lines. You could start by adapting the default schedule listed earlier, or you could read the crontab(5) documentation for advanced scheduling techniques. Please do use the full path and `--exec-path` techniques from the default schedule to ensure you are executing the correct binaries in your schedule.

Background maintenance on Linux systemd systems
-----------------------------------------------

While Linux supports `cron`, depending on the distribution, `cron` may be an optional package not necessarily installed. On modern Linux distributions, systemd timers are superseding it.

If user systemd timers are available, they will be used as a replacement for `cron`. In this case, `git maintenance start` will create user systemd timer units and start the timers. The current list of user-scheduled tasks can be found by running `systemctl --user list-timers`. The timers written by `git maintenance start` are similar to this:

```
$ systemctl --user list-timers
NEXT                         LEFT          LAST                         PASSED     UNIT                          ACTIVATES
Thu 2021-04-29 19:00:00 CEST 42min left    Thu 2021-04-29 18:00:11 CEST 17min ago  git-maintenance@hourly.timer  git-maintenance@hourly.service
Fri 2021-04-30 00:00:00 CEST 5h 42min left Thu 2021-04-29 00:00:11 CEST 18h ago    git-maintenance@daily.timer   git-maintenance@daily.service
Mon 2021-05-03 00:00:00 CEST 3 days left   Mon 2021-04-26 00:00:11 CEST 3 days ago git-maintenance@weekly.timer  git-maintenance@weekly.service
```

One timer is registered for each `--schedule=<frequency>` option.
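Because `git maintenance start` rewrites its own unit files, persistent customizations belong in a systemd drop-in rather than in the generated units. As an illustrative sketch (assuming the generated template unit is named `git-maintenance@.service`), a drop-in that lowers the CPU priority of all maintenance runs might look like:

```
# ~/.config/systemd/user/git-maintenance@.service.d/override.conf
[Service]
Nice=10
```

After creating the file, run `systemctl --user daemon-reload` so the override takes effect.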
The definition of the systemd units can be inspected in the following files:

```
~/.config/systemd/user/git-maintenance@.timer
~/.config/systemd/user/git-maintenance@.service
~/.config/systemd/user/timers.target.wants/git-maintenance@hourly.timer
~/.config/systemd/user/timers.target.wants/git-maintenance@daily.timer
~/.config/systemd/user/timers.target.wants/git-maintenance@weekly.timer
```

`git maintenance start` will overwrite these files and start the timer again with `systemctl --user`, so any customization should be done by creating a drop-in file, i.e. a `.conf` suffixed file in the `~/.config/systemd/user/git-maintenance@.service.d` directory.

`git maintenance stop` will stop the user systemd timers and delete the above mentioned files.

For more details, see `systemd.timer(5)`.

Background maintenance on macOS systems
---------------------------------------

While macOS technically supports `cron`, using `crontab -e` requires elevated privileges and the executed process does not have a full user context. Without a full user context, Git and its credential helpers cannot access stored credentials, so some maintenance tasks are not functional.

Instead, `git maintenance start` interacts with the `launchctl` tool, which is the recommended way to schedule timed jobs in macOS. Scheduling maintenance through `git maintenance (start|stop)` requires some `launchctl` features available only in macOS 10.11 or later.

Your user-specific scheduled tasks are stored as XML-formatted `.plist` files in `~/Library/LaunchAgents/`. You can see the currently-registered tasks using the following command:

```
$ ls ~/Library/LaunchAgents/org.git-scm.git*
org.git-scm.git.daily.plist
org.git-scm.git.hourly.plist
org.git-scm.git.weekly.plist
```

One task is registered for each `--schedule=<frequency>` option. To inspect how the XML format describes each schedule, open one of these `.plist` files in an editor and inspect the `<array>` element following the `<key>StartCalendarInterval</key>` element.
`git maintenance start` will overwrite these files and register the tasks again with `launchctl`, so any customizations should be done by creating your own `.plist` files with distinct names. Similarly, the `git maintenance stop` command will unregister the tasks with `launchctl` and delete the `.plist` files.

To create more advanced customizations to your background tasks, see launchctl.plist(5) for more information.

Background maintenance on Windows systems
-----------------------------------------

Windows does not support `cron` and instead has its own system for scheduling background tasks. The `git maintenance start` command uses the `schtasks` command to submit tasks to this system. You can inspect all background tasks using the Task Scheduler application. The tasks added by Git have names of the form `Git Maintenance (<frequency>)`. The Task Scheduler GUI has ways to inspect these tasks, but you can also export the tasks to XML files and view the details there.

Note that since Git is a console application, these background tasks create a console window visible to the current user. This can be changed manually by selecting the "Run whether user is logged in or not" option in Task Scheduler. This change requires a password input, which is why `git maintenance start` does not select it by default.

If you want to customize the background tasks, please rename the tasks so future calls to `git maintenance (start|stop)` do not overwrite your custom tasks.

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

maintenance.auto
This boolean config option controls whether some commands run `git maintenance run --auto` after doing their normal work. Defaults to true.

maintenance.strategy
This string config option provides a way to specify one of a few recommended schedules for background maintenance.
This only affects which tasks are run during `git maintenance run --schedule=X` commands, provided no `--task=<task>` arguments are provided. Further, if a `maintenance.<task>.schedule` config value is set, then that value is used instead of the one provided by `maintenance.strategy`. The possible strategy strings are:

* `none`: This default setting implies that no tasks are run at any schedule.
* `incremental`: This setting optimizes for performing small maintenance activities that do not delete any data. This does not schedule the `gc` task, but runs the `prefetch` and `commit-graph` tasks hourly, the `loose-objects` and `incremental-repack` tasks daily, and the `pack-refs` task weekly.

maintenance.<task>.enabled
This boolean config option controls whether the maintenance task with name `<task>` is run when no `--task` option is specified to `git maintenance run`. These config values are ignored if a `--task` option exists. By default, only `maintenance.gc.enabled` is true.

maintenance.<task>.schedule
This config option controls whether or not the given `<task>` runs during a `git maintenance run --schedule=<frequency>` command. The value must be one of "hourly", "daily", or "weekly".

maintenance.commit-graph.auto
This integer config option controls how often the `commit-graph` task should be run as part of `git maintenance run --auto`. If zero, then the `commit-graph` task will not run with the `--auto` option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of reachable commits that are not in the commit-graph file is at least the value of `maintenance.commit-graph.auto`. The default value is 100.

maintenance.loose-objects.auto
This integer config option controls how often the `loose-objects` task should be run as part of `git maintenance run --auto`. If zero, then the `loose-objects` task will not run with the `--auto` option. A negative value will force the task to run every time.
Otherwise, a positive value implies the command should run when the number of loose objects is at least the value of `maintenance.loose-objects.auto`. The default value is 100. maintenance.incremental-repack.auto This integer config option controls how often the `incremental-repack` task should be run as part of `git maintenance run --auto`. If zero, then the `incremental-repack` task will not run with the `--auto` option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of pack-files not in the multi-pack-index is at least the value of `maintenance.incremental-repack.auto`. The default value is 10.
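These configuration knobs can be combined; the following is a hedged sketch in a scratch repository, with arbitrary illustrative values:

```shell
# Scratch repository so no real config is touched.
tmp=$(mktemp -d)
git init --quiet "$tmp/repo"
cd "$tmp/repo"

# Use the incremental strategy, but demote one task to daily...
git config maintenance.strategy incremental
git config maintenance.commit-graph.schedule daily

# ...and make the loose-objects task fire less eagerly under --auto.
git config maintenance.loose-objects.auto 500

git config --get maintenance.strategy             # prints: incremental
git config --get maintenance.loose-objects.auto   # prints: 500
```

Per the rules above, the explicit `maintenance.commit-graph.schedule` value overrides the hourly schedule that `maintenance.strategy=incremental` would otherwise provide for that task.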
git git-config

git-config
==========

Name
----

git-config - Get and set repository or global options

Synopsis
--------

```
git config [<file-option>] [--type=<type>] [--fixed-value] [--show-origin] [--show-scope] [-z|--null] <name> [<value> [<value-pattern>]]
git config [<file-option>] [--type=<type>] --add <name> <value>
git config [<file-option>] [--type=<type>] [--fixed-value] --replace-all <name> <value> [<value-pattern>]
git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] --get <name> [<value-pattern>]
git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] --get-all <name> [<value-pattern>]
git config [<file-option>] [--type=<type>] [--show-origin] [--show-scope] [-z|--null] [--fixed-value] [--name-only] --get-regexp <name-regex> [<value-pattern>]
git config [<file-option>] [--type=<type>] [-z|--null] --get-urlmatch <name> <URL>
git config [<file-option>] [--fixed-value] --unset <name> [<value-pattern>]
git config [<file-option>] [--fixed-value] --unset-all <name> [<value-pattern>]
git config [<file-option>] --rename-section <old-name> <new-name>
git config [<file-option>] --remove-section <name>
git config [<file-option>] [--show-origin] [--show-scope] [-z|--null] [--name-only] -l | --list
git config [<file-option>] --get-color <name> [<default>]
git config [<file-option>] --get-colorbool <name> [<stdout-is-tty>]
git config [<file-option>] -e | --edit
```

Description
-----------

You can query/set/replace/unset options with this command. The name is actually the section and the key separated by a dot, and the value will be escaped.

Multiple lines can be added to an option by using the `--add` option. If you want to update or unset an option which can occur on multiple lines, a `value-pattern` (which is an extended regular expression, unless the `--fixed-value` option is given) needs to be given. Only the existing values that match the pattern are updated or unset.
If you want to handle the lines that do **not** match the pattern, just prepend a single exclamation mark in front (see also [EXAMPLES](#EXAMPLES)), but note that this only works when the `--fixed-value` option is not in use. The `--type=<type>` option instructs `git config` to ensure that incoming and outgoing values are canonicalize-able under the given <type>. If no `--type=<type>` is given, no canonicalization will be performed. Callers may unset an existing `--type` specifier with `--no-type`. When reading, the values are read from the system, global and repository local configuration files by default, and options `--system`, `--global`, `--local`, `--worktree` and `--file <filename>` can be used to tell the command to read from only that location (see [FILES](#FILES)). When writing, the new value is written to the repository local configuration file by default, and options `--system`, `--global`, `--worktree`, `--file <filename>` can be used to tell the command to write to that location (you can say `--local` but that is the default). This command will fail with non-zero status upon error. Some exit codes are: * The section or key is invalid (ret=1), * no section or name was provided (ret=2), * the config file is invalid (ret=3), * the config file cannot be written (ret=4), * you try to unset an option which does not exist (ret=5), * you try to unset/set an option for which multiple lines match (ret=5), or * you try to use an invalid regexp (ret=6). On success, the command returns the exit code 0. A list of all available configuration variables can be obtained using the `git help --config` command. Options ------- --replace-all Default behavior is to replace at most one line. This replaces all lines matching the key (and optionally the `value-pattern`). --add Adds a new line to the option without altering any existing values. This is the same as providing `^$` as the `value-pattern` in `--replace-all`. 
--get Get the value for a given key (optionally filtered by a regex matching the value). Returns error code 1 if the key was not found and the last value if multiple key values were found. --get-all Like get, but returns all values for a multi-valued key. --get-regexp Like --get-all, but interprets the name as a regular expression and writes out the key names. Regular expression matching is currently case-sensitive and done against a canonicalized version of the key in which section and variable names are lowercased, but subsection names are not. --get-urlmatch <name> <URL> When given a two-part name section.key, the value for section.<URL>.key whose <URL> part matches the best to the given URL is returned (if no such key exists, the value for section.key is used as a fallback). When given just the section as name, do so for all the keys in the section and list them. Returns error code 1 if no value is found. --global For writing options: write to global `~/.gitconfig` file rather than the repository `.git/config`, write to `$XDG_CONFIG_HOME/git/config` file if this file exists and the `~/.gitconfig` file doesn’t. For reading options: read only from global `~/.gitconfig` and from `$XDG_CONFIG_HOME/git/config` rather than from all available files. See also [FILES](#FILES). --system For writing options: write to system-wide `$(prefix)/etc/gitconfig` rather than the repository `.git/config`. For reading options: read only from system-wide `$(prefix)/etc/gitconfig` rather than from all available files. See also [FILES](#FILES). --local For writing options: write to the repository `.git/config` file. This is the default behavior. For reading options: read only from the repository `.git/config` rather than from all available files. See also [FILES](#FILES). --worktree Similar to `--local` except that `$GIT_DIR/config.worktree` is read from or written to if `extensions.worktreeConfig` is enabled. If not it’s the same as `--local`. 
Note that `$GIT_DIR` is equal to `$GIT_COMMON_DIR` for the main working tree, but is of the form `$GIT_DIR/worktrees/<id>/` for other working trees. See [git-worktree[1]](git-worktree) to learn how to enable `extensions.worktreeConfig`. -f <config-file> --file <config-file> For writing options: write to the specified file rather than the repository `.git/config`. For reading options: read only from the specified file rather than from all available files. See also [FILES](#FILES). --blob <blob> Similar to `--file` but use the given blob instead of a file. E.g. you can use `master:.gitmodules` to read values from the file `.gitmodules` in the master branch. See "SPECIFYING REVISIONS" section in [gitrevisions[7]](gitrevisions) for a more complete list of ways to spell blob names. --remove-section Remove the given section from the configuration file. --rename-section Rename the given section to a new name. --unset Remove the line matching the key from config file. --unset-all Remove all lines matching the key from config file. -l --list List all variables set in config file, along with their values. --fixed-value When used with the `value-pattern` argument, treat `value-pattern` as an exact string instead of a regular expression. This will restrict the name/value pairs that are matched to only those where the value is exactly equal to the `value-pattern`. --type <type> `git config` will ensure that any input or output is valid under the given type constraint(s), and will canonicalize outgoing values in `<type>`'s canonical form. Valid `<type>`'s include: * `bool`: canonicalize values as either "true" or "false". * `int`: canonicalize values as simple decimal numbers. An optional suffix of `k`, `m`, or `g` will cause the value to be multiplied by 1024, 1048576, or 1073741824 upon input. * `bool-or-int`: canonicalize according to either `bool` or `int`, as described above. 
* `path`: canonicalize by expanding a leading `~` to the value of `$HOME` and `~user` to the home directory for the specified user. This specifier has no effect when setting the value (but you can use `git config section.variable ~/` from the command line to let your shell do the expansion.)
* `expiry-date`: canonicalize by converting from a fixed or relative date-string to a timestamp. This specifier has no effect when setting the value.
* `color`: When getting a value, canonicalize by converting to an ANSI color escape sequence. When setting a value, a sanity-check is performed to ensure that the given value is canonicalize-able as an ANSI color, but it is written as-is.

--bool
--int
--bool-or-int
--path
--expiry-date
Historical options for selecting a type specifier. Prefer instead `--type` (see above).

--no-type
Un-sets the previously set type specifier (if one was previously set). This option requests that `git config` not canonicalize the retrieved variable. `--no-type` has no effect without `--type=<type>` or `--<type>`.

-z
--null
For all options that output values and/or keys, always end values with the null character (instead of a newline). Use newline instead as a delimiter between key and value. This allows for secure parsing of the output without getting confused e.g. by values that contain line breaks.

--name-only
Output only the names of config variables for `--list` or `--get-regexp`.

--show-origin
Augment the output of all queried config options with the origin type (file, standard input, blob, command line) and the actual origin (config file path, ref, or blob id if applicable).

--show-scope
Similar to `--show-origin` in that it augments the output of all queried config options with the scope of that value (worktree, local, global, system, command).

--get-colorbool <name> [<stdout-is-tty>]
Find the color setting for `<name>` (e.g. `color.diff`) and output "true" or "false".
`<stdout-is-tty>` should be either "true" or "false", and is taken into account when configuration says "auto". If `<stdout-is-tty>` is missing, then checks the standard output of the command itself, and exits with status 0 if color is to be used, or exits with status 1 otherwise. When the color setting for `name` is undefined, the command uses `color.ui` as fallback.

--get-color <name> [<default>]
Find the color configured for `name` (e.g. `color.diff.new`) and output it as the ANSI color escape sequence to the standard output. The optional `default` parameter is used instead, if there is no color configured for `name`.

`--type=color [--default=<default>]` is preferred over `--get-color` (but note that `--get-color` will omit the trailing newline printed by `--type=color`).

-e
--edit
Opens an editor to modify the specified config file; either `--system`, `--global`, or repository (default).

--[no-]includes
Respect `include.*` directives in config files when looking up values. Defaults to `off` when a specific file is given (e.g., using `--file`, `--global`, etc) and `on` when searching all config files.

--default <value>
When using `--get`, and the requested variable is not found, behave as if <value> were the value assigned to that variable.

Configuration
-------------

`pager.config` is only respected when listing configuration, i.e., when using `--list` or any of the `--get-*` which may return multiple results. The default is to use a pager.

Files
-----

By default, `git config` will read configuration options from multiple files:

$(prefix)/etc/gitconfig
System-wide configuration file.

$XDG\_CONFIG\_HOME/git/config
~/.gitconfig
User-specific configuration files. When the XDG\_CONFIG\_HOME environment variable is not set or empty, $HOME/.config/ is used as $XDG\_CONFIG\_HOME. These are also called "global" configuration files. If both files exist, both files are read in the order given above.

$GIT\_DIR/config
Repository specific configuration file.
$GIT\_DIR/config.worktree This is optional and is only searched when `extensions.worktreeConfig` is present in $GIT\_DIR/config. You may also provide additional configuration parameters when running any git command by using the `-c` option. See [git[1]](git) for details. Options will be read from all of these files that are available. If the global or the system-wide configuration files are missing or unreadable they will be ignored. If the repository configuration file is missing or unreadable, `git config` will exit with a non-zero error code. An error message is produced if the file is unreadable, but not if it is missing. The files are read in the order given above, with last value found taking precedence over values read earlier. When multiple values are taken then all values of a key from all files will be used. By default, options are only written to the repository specific configuration file. Note that this also affects options like `--replace-all` and `--unset`. ***git config* will only ever change one file at a time**. You can limit which configuration sources are read from or written to by specifying the path of a file with the `--file` option, or by specifying a configuration scope with `--system`, `--global`, `--local`, or `--worktree`. For more, see [OPTIONS](#OPTIONS) above. Scopes ------ Each configuration source falls within a configuration scope. The scopes are: system $(prefix)/etc/gitconfig global $XDG\_CONFIG\_HOME/git/config ~/.gitconfig local $GIT\_DIR/config worktree $GIT\_DIR/config.worktree command GIT\_CONFIG\_{COUNT,KEY,VALUE} environment variables (see [ENVIRONMENT](#ENVIRONMENT) below) the `-c` option With the exception of `command`, each scope corresponds to a command line option: `--system`, `--global`, `--local`, `--worktree`. When reading options, specifying a scope will only read options from the files within that scope. 
When writing options, specifying a scope will write to the files within that scope (instead of the repository specific configuration file). See [OPTIONS](#OPTIONS) above for a complete description.

Most configuration options are respected regardless of the scope they are defined in, but some options are only respected in certain scopes. See the respective option’s documentation for the full details.

### Protected configuration

Protected configuration refers to the `system`, `global`, and `command` scopes. For security reasons, certain options are only respected when they are specified in protected configuration, and ignored otherwise. Git treats these scopes as if they are controlled by the user or a trusted administrator. This is because an attacker who controls these scopes can do substantial harm without using Git, so it is assumed that the user’s environment protects these scopes against attackers.

Environment
-----------

GIT\_CONFIG\_GLOBAL
GIT\_CONFIG\_SYSTEM
Take the configuration from the given files instead of from the global or system-level configuration. See [git[1]](git) for details.

GIT\_CONFIG\_NOSYSTEM
Whether to skip reading settings from the system-wide $(prefix)/etc/gitconfig file. See [git[1]](git) for details. See also [FILES](#FILES).

GIT\_CONFIG\_COUNT
GIT\_CONFIG\_KEY\_<n>
GIT\_CONFIG\_VALUE\_<n>
If GIT\_CONFIG\_COUNT is set to a positive number, all environment pairs GIT\_CONFIG\_KEY\_<n> and GIT\_CONFIG\_VALUE\_<n> up to that number will be added to the process’s runtime configuration. The config pairs are zero-indexed. Any missing key or value is treated as an error. An empty GIT\_CONFIG\_COUNT is treated the same as GIT\_CONFIG\_COUNT=0, namely no pairs are processed. These environment variables will override values in configuration files, but will be overridden by any explicit options passed via `git -c`.
This is useful for cases where you want to spawn multiple git commands with a common configuration but cannot depend on a configuration file, for example when writing scripts.

GIT\_CONFIG
If no `--file` option is provided to `git config`, use the file given by `GIT_CONFIG` as if it were provided via `--file`. This variable has no effect on other Git commands, and is mostly for historical compatibility; there is generally no reason to use it instead of the `--file` option.

Examples
--------

Given a .git/config like this:

```
#
# This is the config file, and
# a '#' or ';' character indicates
# a comment
#

; core variables
[core]
	; Don't trust file modes
	filemode = false

; Our diff algorithm
[diff]
	external = /usr/local/bin/diff-wrapper
	renames = true

; Proxy settings
[core]
	gitproxy=proxy-command for kernel.org
	gitproxy=default-proxy ; for all the rest

; HTTP
[http]
	sslVerify
[http "https://weak.example.com"]
	sslVerify = false
	cookieFile = /tmp/cookie.txt
```

you can set the filemode to true with

```
% git config core.filemode true
```

The hypothetical proxy command entries actually have a postfix to discern what URL they apply to. Here is how to change the entry for kernel.org to "ssh".

```
% git config core.gitproxy '"ssh" for kernel.org' 'for kernel.org$'
```

This makes sure that only the key/value pair for kernel.org is replaced.

To delete the entry for renames, do

```
% git config --unset diff.renames
```

If you want to delete an entry for a multivar (like core.gitproxy above), you have to provide a regex matching the value of exactly one line.
To query the value for a given key, do ``` % git config --get core.filemode ``` or ``` % git config core.filemode ``` or, to query a multivar: ``` % git config --get core.gitproxy "for kernel.org$" ``` If you want to know all the values for a multivar, do: ``` % git config --get-all core.gitproxy ``` If you like to live dangerously, you can replace **all** core.gitproxy by a new one with ``` % git config --replace-all core.gitproxy ssh ``` However, if you really only want to replace the line for the default proxy, i.e. the one without a "for …​" postfix, do something like this: ``` % git config core.gitproxy ssh '! for ' ``` To actually match only values with an exclamation mark, you have to ``` % git config section.key value '[!]' ``` To add a new proxy, without altering any of the existing ones, use ``` % git config --add core.gitproxy '"proxy-command" for example.com' ``` An example to use customized color from the configuration in your script: ``` #!/bin/sh WS=$(git config --get-color color.diff.whitespace "blue reverse") RESET=$(git config --get-color "" "reset") echo "${WS}your whitespace color or blue reverse${RESET}" ``` For URLs in `https://weak.example.com`, `http.sslVerify` is set to false, while it is set to `true` for all others: ``` % git config --type=bool --get-urlmatch http.sslverify https://good.example.com true % git config --type=bool --get-urlmatch http.sslverify https://weak.example.com false % git config --get-urlmatch http https://weak.example.com http.cookieFile /tmp/cookie.txt http.sslverify false ``` Configuration file ------------------ The Git configuration file contains a number of variables that affect the Git commands' behavior. 
The files `.git/config` and optionally `config.worktree` (see the "CONFIGURATION FILE" section of [git-worktree[1]](git-worktree)) in each repository are used to store the configuration for that repository, and `$HOME/.gitconfig` is used to store a per-user configuration as fallback values for the `.git/config` file. The file `/etc/gitconfig` can be used to store a system-wide default configuration. The configuration variables are used by both the Git plumbing and the porcelains. The variables are divided into sections, wherein the fully qualified variable name of the variable itself is the last dot-separated segment and the section name is everything before the last dot. The variable names are case-insensitive, allow only alphanumeric characters and `-`, and must start with an alphabetic character. Some variables may appear multiple times; we say then that the variable is multivalued. ### Syntax The syntax is fairly flexible and permissive; whitespaces are mostly ignored. The `#` and `;` characters begin comments to the end of line, blank lines are ignored. The file consists of sections and variables. A section begins with the name of the section in square brackets and continues until the next section begins. Section names are case-insensitive. Only alphanumeric characters, `-` and `.` are allowed in section names. Each variable must belong to some section, which means that there must be a section header before the first setting of a variable. Sections can be further divided into subsections. To begin a subsection put its name in double quotes, separated by space from the section name, in the section header, like in the example below: ``` [section "subsection"] ``` Subsection names are case sensitive and can contain any characters except newline and the null byte. Doublequote `"` and backslash can be included by escaping them as `\"` and `\\`, respectively. 
Backslashes preceding other characters are dropped when reading; for example, `\t` is read as `t` and `\0` is read as `0`. Section headers cannot span multiple lines. Variables may belong directly to a section or to a given subsection. You can have `[section]` if you have `[section "subsection"]`, but you don’t need to. There is also a deprecated `[section.subsection]` syntax. With this syntax, the subsection name is converted to lower-case and is also compared case sensitively. These subsection names follow the same restrictions as section names. All the other lines (and the remainder of the line after the section header) are recognized as setting variables, in the form `name = value` (or just `name`, which is a short-hand to say that the variable is the boolean "true"). The variable names are case-insensitive, allow only alphanumeric characters and `-`, and must start with an alphabetic character. A line that defines a value can be continued to the next line by ending it with a `\`; the backslash and the end-of-line are stripped. Leading whitespaces after `name =`, the remainder of the line after the first comment character `#` or `;`, and trailing whitespaces of the line are discarded unless they are enclosed in double quotes. Internal whitespaces within the value are retained verbatim. Inside double quotes, double quote `"` and backslash `\` characters must be escaped: use `\"` for `"` and `\\` for `\`. The following escape sequences (beside `\"` and `\\`) are recognized: `\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB) and `\b` for backspace (BS). Other char escape sequences (including octal escape sequences) are invalid. ### Includes The `include` and `includeIf` sections allow you to include config directives from another source. These sections behave identically to each other with the exception that `includeIf` sections may be ignored if their condition does not evaluate to true; see "Conditional includes" below. 
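A minimal sketch of these quoting and continuation rules, using a hypothetical file read back with `--file` (the file, section, and key names are made up):

```
cat >syntax.config <<'EOF'
[demo]
	; escape sequences expand inside double quotes
	greeting = "hello,\nworld"
	; a backslash at end of line continues the value
	long = one \
two
EOF
git config --file syntax.config demo.greeting   # prints "hello," then "world" on the next line
git config --file syntax.config demo.long       # prints "one two"
```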
You can include a config file from another by setting the special `include.path` (or `includeIf.*.path`) variable to the name of the file to be included. The variable takes a pathname as its value, and is subject to tilde expansion. These variables can be given multiple times. The contents of the included file are inserted immediately, as if they had been found at the location of the include directive. If the value of the variable is a relative path, the path is considered to be relative to the configuration file in which the include directive was found. See below for examples. ### Conditional includes You can include a config file from another conditionally by setting a `includeIf.<condition>.path` variable to the name of the file to be included. The condition starts with a keyword followed by a colon and some data whose format and meaning depends on the keyword. Supported keywords are: `gitdir` The data that follows the keyword `gitdir:` is used as a glob pattern. If the location of the .git directory matches the pattern, the include condition is met. The .git location may be auto-discovered, or come from `$GIT_DIR` environment variable. If the repository is auto discovered via a .git file (e.g. from submodules, or a linked worktree), the .git location would be the final location where the .git directory is, not where the .git file is. The pattern can contain standard globbing wildcards and two additional ones, `**/` and `/**`, that can match multiple path components. Please refer to [gitignore[5]](gitignore) for details. For convenience: * If the pattern starts with `~/`, `~` will be substituted with the content of the environment variable `HOME`. * If the pattern starts with `./`, it is replaced with the directory containing the current config file. * If the pattern does not start with either `~/`, `./` or `/`, `**/` will be automatically prepended. For example, the pattern `foo/bar` becomes `**/foo/bar` and would match `/any/path/to/foo/bar`. 
* If the pattern ends with `/`, `**` will be automatically added. For example, the pattern `foo/` becomes `foo/**`. In other words, it matches "foo" and everything inside, recursively. `gitdir/i` This is the same as `gitdir` except that matching is done case-insensitively (e.g. on case-insensitive file systems) `onbranch` The data that follows the keyword `onbranch:` is taken to be a pattern with standard globbing wildcards and two additional ones, `**/` and `/**`, that can match multiple path components. If we are in a worktree where the name of the branch that is currently checked out matches the pattern, the include condition is met. If the pattern ends with `/`, `**` will be automatically added. For example, the pattern `foo/` becomes `foo/**`. In other words, it matches all branches that begin with `foo/`. This is useful if your branches are organized hierarchically and you would like to apply a configuration to all the branches in that hierarchy. `hasconfig:remote.*.url:` The data that follows this keyword is taken to be a pattern with standard globbing wildcards and two additional ones, `**/` and `/**`, that can match multiple components. The first time this keyword is seen, the rest of the config files will be scanned for remote URLs (without applying any values). If there exists at least one remote URL that matches this pattern, the include condition is met. Files included by this option (directly or indirectly) are not allowed to contain remote URLs. Note that unlike other includeIf conditions, resolving this condition relies on information that is not yet known at the point of reading the condition. A typical use case is this option being present as a system-level or global-level config, and the remote URL being in a local-level config; hence the need to scan ahead when resolving this condition. 
In order to avoid the chicken-and-egg problem in which potentially-included files can affect whether such files are potentially included, Git breaks the cycle by prohibiting these files from affecting the resolution of these conditions (thus, prohibiting them from declaring remote URLs). As for the naming of this keyword, it is for forward compatibility with a naming scheme that supports more variable-based include conditions, but currently Git only supports the exact keyword described above. A few more notes on matching via `gitdir` and `gitdir/i`: * Symlinks in `$GIT_DIR` are not resolved before matching. * Both the symlink and realpath versions of paths will be matched outside of `$GIT_DIR`. E.g. if ~/git is a symlink to /mnt/storage/git, both `gitdir:~/git` and `gitdir:/mnt/storage/git` will match. This was not the case in the initial release of this feature in v2.13.0, which only matched the realpath version. Configuration that wants to be compatible with the initial release of this feature needs to either specify only the realpath version, or both versions. * Note that "../" is not special and will match literally, which is unlikely to be what you want.
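An end-to-end sketch of a `gitdir` conditional include, using throwaway paths (every name here is illustrative); the included file takes effect only for the repository whose .git directory matches the pattern:

```
dir=$(mktemp -d)
cd "$dir"
git init -q repo
# The file to be included; an absolute path avoids the relative-path rule.
cat >shared.inc <<'EOF'
[user]
	name = Included Name
EOF
cat >>repo/.git/config <<EOF
[includeIf "gitdir:$dir/repo/.git"]
	path = $dir/shared.inc
EOF
git -C repo config user.name   # prints "Included Name"
```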
### Example ``` # Core variables [core] ; Don't trust file modes filemode = false # Our diff algorithm [diff] external = /usr/local/bin/diff-wrapper renames = true [branch "devel"] remote = origin merge = refs/heads/devel # Proxy settings [core] gitProxy="ssh" for "kernel.org" gitProxy=default-proxy ; for the rest [include] path = /path/to/foo.inc ; include by absolute path path = foo.inc ; find "foo.inc" relative to the current file path = ~/foo.inc ; find "foo.inc" in your `$HOME` directory ; include if $GIT_DIR is /path/to/foo/.git [includeIf "gitdir:/path/to/foo/.git"] path = /path/to/foo.inc ; include for all repositories inside /path/to/group [includeIf "gitdir:/path/to/group/"] path = /path/to/foo.inc ; include for all repositories inside $HOME/to/group [includeIf "gitdir:~/to/group/"] path = /path/to/foo.inc ; relative paths are always relative to the including ; file (if the condition is true); their location is not ; affected by the condition [includeIf "gitdir:/path/to/group/"] path = foo.inc ; include only if we are in a worktree where foo-branch is ; currently checked out [includeIf "onbranch:foo-branch"] path = foo.inc ; include only if a remote with the given URL exists (note ; that such a URL may be provided later in a file or in a ; file read after this file is read, as seen in this example) [includeIf "hasconfig:remote.*.url:https://example.com/**"] path = foo.inc [remote "origin"] url = https://example.com/git ``` ### Values Values of many variables are treated as a simple string, but there are variables that take values of specific types and there are rules as to how to spell them. boolean When a variable is said to take a boolean value, many synonyms are accepted for `true` and `false`; these are all case-insensitive. true Boolean true literals are `yes`, `on`, `true`, and `1`. Also, a variable defined without `= <value>` is taken as true. false Boolean false literals are `no`, `off`, `false`, `0` and the empty string. 
When converting a value to its canonical form using the `--type=bool` type specifier, `git config` will ensure that the output is "true" or "false" (spelled in lowercase). integer The value for many variables that specify various sizes can be suffixed with `k`, `M`,…​ to mean "scale the number by 1024", "by 1024x1024", etc. color The value for a variable that takes a color is a list of colors (at most two, one for foreground and one for background) and attributes (as many as you want), separated by spaces. The basic colors accepted are `normal`, `black`, `red`, `green`, `yellow`, `blue`, `magenta`, `cyan`, `white` and `default`. The first color given is the foreground; the second is the background. All the basic colors except `normal` and `default` have a bright variant that can be specified by prefixing the color with `bright`, like `brightred`. The color `normal` makes no change to the color. It is the same as an empty string, but can be used as the foreground color when specifying a background color alone (for example, "normal red"). The color `default` explicitly resets the color to the terminal default, for example to specify a cleared background. Although it varies between terminals, this is usually not the same as setting to "white black". Colors may also be given as numbers between 0 and 255; these use ANSI 256-color mode (but note that not all terminals may support this). If your terminal supports it, you may also specify 24-bit RGB values as hex, like `#ff0ab3`. The accepted attributes are `bold`, `dim`, `ul`, `blink`, `reverse`, `italic`, and `strike` (for crossed-out or "strikethrough" letters). The position of any attributes with respect to the colors (before, after, or in between), doesn’t matter. Specific attributes may be turned off by prefixing them with `no` or `no-` (e.g., `noreverse`, `no-ul`, etc). The pseudo-attribute `reset` resets all colors and attributes before applying the specified coloring. 
For example, `reset green` will result in a green foreground and default background without any active attributes. An empty color string produces no color effect at all. This can be used to avoid coloring specific elements without disabling color entirely. For git’s pre-defined color slots, the attributes are meant to be reset at the beginning of each item in the colored output. So setting `color.decorate.branch` to `black` will paint that branch name in a plain `black`, even if the previous thing on the same output line (e.g. opening parenthesis before the list of branch names in `log --decorate` output) is set to be painted with `bold` or some other attribute. However, custom log formats may do more complicated and layered coloring, and the negated forms may be useful there. pathname A variable that takes a pathname value can be given a string that begins with "`~/`" or "`~user/`", and the usual tilde expansion happens to such a string: `~/` is expanded to the value of `$HOME`, and `~user/` to the specified user’s home directory. If a path starts with `%(prefix)/`, the remainder is interpreted as a path relative to Git’s "runtime prefix", i.e. relative to the location where Git itself was installed. For example, `%(prefix)/bin/` refers to the directory in which the Git executable itself lives. If Git was compiled without runtime prefix support, the compiled-in prefix will be substituted instead. In the unlikely event that a literal path needs to be specified that should `not` be expanded, it needs to be prefixed by `./`, like so: `./%(prefix)/bin`. ### Variables Note that this list is non-comprehensive and not necessarily complete. For command-specific variables, you will find a more detailed description in the appropriate manual page. Other git-related tools may and do use their own variables. 
When inventing new variables for use in your own tool, make sure their names do not conflict with those that are used by Git itself and other popular tools, and describe them in your documentation. advice.\* These variables control various optional help messages designed to aid new users. All `advice.*` variables default to `true`, and you can tell Git that you do not need help by setting these to `false`: ambiguousFetchRefspec Advice shown when fetch refspec for multiple remotes map to the same remote-tracking branch namespace and causes branch tracking set-up to fail. fetchShowForcedUpdates Advice shown when [git-fetch[1]](git-fetch) takes a long time to calculate forced updates after ref updates, or to warn that the check is disabled. pushUpdateRejected Set this variable to `false` if you want to disable `pushNonFFCurrent`, `pushNonFFMatching`, `pushAlreadyExists`, `pushFetchFirst`, `pushNeedsForce`, and `pushRefNeedsUpdate` simultaneously. pushNonFFCurrent Advice shown when [git-push[1]](git-push) fails due to a non-fast-forward update to the current branch. pushNonFFMatching Advice shown when you ran [git-push[1]](git-push) and pushed `matching refs` explicitly (i.e. you used `:`, or specified a refspec that isn’t your current branch) and it resulted in a non-fast-forward error. pushAlreadyExists Shown when [git-push[1]](git-push) rejects an update that does not qualify for fast-forwarding (e.g., a tag.) pushFetchFirst Shown when [git-push[1]](git-push) rejects an update that tries to overwrite a remote ref that points at an object we do not have. pushNeedsForce Shown when [git-push[1]](git-push) rejects an update that tries to overwrite a remote ref that points at an object that is not a commit-ish, or make the remote ref point at an object that is not a commit-ish. 
pushUnqualifiedRefname Shown when [git-push[1]](git-push) gives up trying to guess based on the source and destination refs what remote ref namespace the source belongs in, but where we can still suggest that the user push to either refs/heads/\* or refs/tags/\* based on the type of the source object. pushRefNeedsUpdate Shown when [git-push[1]](git-push) rejects a forced update of a branch when its remote-tracking ref has updates that we do not have locally. skippedCherryPicks Shown when [git-rebase[1]](git-rebase) skips a commit that has already been cherry-picked onto the upstream branch. statusAheadBehind Shown when [git-status[1]](git-status) computes the ahead/behind counts for a local ref compared to its remote tracking ref, and that calculation takes longer than expected. Will not appear if `status.aheadBehind` is false or the option `--no-ahead-behind` is given. statusHints Show directions on how to proceed from the current state in the output of [git-status[1]](git-status), in the template shown when writing commit messages in [git-commit[1]](git-commit), and in the help message shown by [git-switch[1]](git-switch) or [git-checkout[1]](git-checkout) when switching branch. statusUoption Advise to consider using the `-u` option to [git-status[1]](git-status) when the command takes more than 2 seconds to enumerate untracked files. commitBeforeMerge Advice shown when [git-merge[1]](git-merge) refuses to merge to avoid overwriting local changes. resetNoRefresh Advice to consider using the `--no-refresh` option to [git-reset[1]](git-reset) when the command takes more than 2 seconds to refresh the index after reset. resolveConflict Advice shown by various commands when conflicts prevent the operation from being performed. sequencerInUse Advice shown when a sequencer command is already in progress. implicitIdentity Advice on how to set your identity configuration when your information is guessed from the system username and domain name. 
detachedHead Advice shown when you used [git-switch[1]](git-switch) or [git-checkout[1]](git-checkout) to move to the detached HEAD state, to instruct how to create a local branch after the fact. suggestDetachingHead Advice shown when [git-switch[1]](git-switch) refuses to detach HEAD without the explicit `--detach` option. checkoutAmbiguousRemoteBranchName Advice shown when the argument to [git-checkout[1]](git-checkout) and [git-switch[1]](git-switch) ambiguously resolves to a remote tracking branch on more than one remote in situations where an unambiguous argument would have otherwise caused a remote-tracking branch to be checked out. See the `checkout.defaultRemote` configuration variable for how to set a given remote to be used by default in some situations where this advice would be printed. amWorkDir Advice that shows the location of the patch file when [git-am[1]](git-am) fails to apply it. rmHints In case of failure in the output of [git-rm[1]](git-rm), show directions on how to proceed from the current state. addEmbeddedRepo Advice on what to do when you’ve accidentally added one git repo inside of another. ignoredHook Advice shown if a hook is ignored because the hook is not set as executable. waitingForEditor Print a message to the terminal whenever Git is waiting for editor input from the user. nestedTag Advice shown if a user attempts to recursively tag a tag object. submoduleAlternateErrorStrategyDie Advice shown when a submodule.alternateErrorStrategy option configured to "die" causes a fatal error. submodulesNotUpdated Advice shown when a user runs a submodule command that fails because `git submodule update --init` was not run. addIgnoredFile Advice shown if a user attempts to add an ignored file to the index. addEmptyPathspec Advice shown if a user runs the add command without providing the pathspec parameter.
updateSparsePath Advice shown when either [git-add[1]](git-add) or [git-rm[1]](git-rm) is asked to update index entries outside the current sparse checkout. core.fileMode Tells Git if the executable bit of files in the working tree is to be honored. Some filesystems lose the executable bit when a file that is marked as executable is checked out, or checks out a non-executable file with executable bit on. [git-clone[1]](git-clone) or [git-init[1]](git-init) probe the filesystem to see if it handles the executable bit correctly and this variable is automatically set as necessary. A repository, however, may be on a filesystem that handles the filemode correctly, and this variable is set to `true` when created, but later may be made accessible from another environment that loses the filemode (e.g. exporting ext4 via CIFS mount, visiting a Cygwin created repository with Git for Windows or Eclipse). In such a case it may be necessary to set this variable to `false`. See [git-update-index[1]](git-update-index). The default is true (when core.filemode is not specified in the config file). core.hideDotFiles (Windows-only) If true, mark newly-created directories and files whose name starts with a dot as hidden. If `dotGitOnly`, only the `.git/` directory is hidden, but no other files starting with a dot. The default mode is `dotGitOnly`. core.ignoreCase Internal variable which enables various workarounds to enable Git to work better on filesystems that are not case sensitive, like APFS, HFS+, FAT, NTFS, etc. For example, if a directory listing finds "makefile" when Git expects "Makefile", Git will assume it is really the same file, and continue to remember it as "Makefile". The default is false, except [git-clone[1]](git-clone) or [git-init[1]](git-init) will probe and set core.ignoreCase true if appropriate when the repository is created. Git relies on the proper configuration of this variable for your operating and file system. 
Modifying this value may result in unexpected behavior. core.precomposeUnicode This option is only used by the Mac OS implementation of Git. When core.precomposeUnicode=true, Git reverts the unicode decomposition of filenames done by Mac OS. This is useful when sharing a repository between Mac OS and Linux or Windows. (Git for Windows 1.7.10 or higher is needed, or Git under cygwin 1.7). When false, file names are handled fully transparently by Git, which is backward compatible with older versions of Git. core.protectHFS If set to true, do not allow checkout of paths that would be considered equivalent to `.git` on an HFS+ filesystem. Defaults to `true` on Mac OS, and `false` elsewhere. core.protectNTFS If set to true, do not allow checkout of paths that would cause problems with the NTFS filesystem, e.g. conflict with 8.3 "short" names. Defaults to `true` on Windows, and `false` elsewhere. core.fsmonitor If set to true, enable the built-in file system monitor daemon for this working directory ([git-fsmonitor--daemon[1]](git-fsmonitor--daemon)). Like hook-based file system monitors, the built-in file system monitor can speed up Git commands that need to refresh the Git index (e.g. `git status`) in a working directory with many files. The built-in monitor eliminates the need to install and maintain an external third-party tool. The built-in file system monitor is currently available only on a limited set of supported platforms. Currently, this includes Windows and MacOS. Otherwise, this variable contains the pathname of the "fsmonitor" hook command. This hook command is used to identify all files that may have changed since the requested date/time. This information is used to speed up git by avoiding unnecessary scanning of files that have not changed. See the "fsmonitor-watchman" section of [githooks[5]](githooks).
Note that if you concurrently use multiple versions of Git, such as one version on the command line and another version in an IDE tool, the definition of `core.fsmonitor` was extended to allow boolean values in addition to hook pathnames. Git versions 2.35.1 and prior will not understand the boolean values and will consider the "true" or "false" values as hook pathnames to be invoked. Git versions 2.26 through 2.35.1 default to hook protocol V2 and will fall back to no fsmonitor (full scan). Git versions prior to 2.26 default to hook protocol V1 and will silently assume there were no changes to report (no scan), so status commands may report incomplete results. For this reason, it is best to upgrade all of your Git versions before using the built-in file system monitor. core.fsmonitorHookVersion Sets the protocol version to be used when invoking the "fsmonitor" hook. There are currently versions 1 and 2. When this is not set, version 2 will be tried first and if it fails then version 1 will be tried. Version 1 uses a timestamp as input to determine which files have changed since that time but some monitors like Watchman have race conditions when used with a timestamp. Version 2 uses an opaque string so that the monitor can return something that can be used to determine what files have changed without race conditions. core.trustctime If false, the ctime differences between the index and the working tree are ignored; useful when the inode change time is regularly modified by something outside Git (file system crawlers and some backup systems). See [git-update-index[1]](git-update-index). True by default. core.splitIndex If true, the split-index feature of the index will be used. See [git-update-index[1]](git-update-index). False by default. core.untrackedCache Determines what to do about the untracked cache feature of the index. It will be kept if this variable is unset or set to `keep`. It will automatically be added if set to `true`.
It will automatically be removed if set to `false`. Before setting it to `true`, you should check that mtime is working properly on your system. See [git-update-index[1]](git-update-index). `keep` by default, unless `feature.manyFiles` is enabled, which sets this setting to `true` by default. core.checkStat When missing or set to `default`, many fields in the stat structure are checked to detect if a file has been modified since Git looked at it. When this configuration variable is set to `minimal`, the sub-second part of mtime and ctime, the uid and gid of the owner of the file, the inode number (and the device number, if Git was compiled to use it), are excluded from the check among these fields, leaving only the whole-second part of mtime (and ctime, if `core.trustCtime` is set) and the filesize to be checked. There are implementations of Git that do not leave usable values in some fields (e.g. JGit); by excluding these fields from the comparison, the `minimal` mode may help interoperability when the same repository is used by these other systems at the same time. core.quotePath Commands that output paths (e.g. `ls-files`, `diff`) will quote "unusual" characters in the pathname by enclosing the pathname in double-quotes and escaping those characters with backslashes in the same way C escapes control characters (e.g. `\t` for TAB, `\n` for LF, `\\` for backslash) or bytes with values larger than 0x80 (e.g. octal `\302\265` for "micro" in UTF-8). If this variable is set to false, bytes higher than 0x80 are not considered "unusual" any more. Double-quotes, backslash and control characters are always escaped regardless of the setting of this variable. A simple space character is not considered "unusual". Many commands can output pathnames completely verbatim using the `-z` option. The default value is true.
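The quoting behavior can be sketched with a file name containing bytes above 0x80 (the repository and file name are illustrative):

```
cd "$(mktemp -d)"
git init -q .
touch naïve.txt     # "ï" is the UTF-8 byte sequence \303\257
git add .
git -c core.quotePath=true  ls-files   # prints the quoted form: "na\303\257ve.txt"
git -c core.quotePath=false ls-files   # prints naïve.txt verbatim
```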
core.eol Sets the line ending type to use in the working directory for files that are marked as text (either by having the `text` attribute set, or by having `text=auto` and Git auto-detecting the contents as text). Alternatives are `lf`, `crlf` and `native`, which uses the platform’s native line ending. The default value is `native`. See [gitattributes[5]](gitattributes) for more information on end-of-line conversion. Note that this value is ignored if `core.autocrlf` is set to `true` or `input`. core.safecrlf If true, makes Git check if converting `CRLF` is reversible when end-of-line conversion is active. Git will verify if a command modifies a file in the work tree either directly or indirectly. For example, committing a file followed by checking out the same file should yield the original file in the work tree. If this is not the case for the current setting of `core.autocrlf`, Git will reject the file. The variable can be set to "warn", in which case Git will only warn about an irreversible conversion but continue the operation. CRLF conversion bears a slight chance of corrupting data. When it is enabled, Git will convert CRLF to LF during commit and LF to CRLF during checkout. A file that contains a mixture of LF and CRLF before the commit cannot be recreated by Git. For text files this is the right thing to do: it corrects line endings such that we have only LF line endings in the repository. But for binary files that are accidentally classified as text the conversion can corrupt data. If you recognize such corruption early you can easily fix it by setting the conversion type explicitly in .gitattributes. Right after committing you still have the original file in your work tree and this file is not yet corrupted. You can explicitly tell Git that this file is binary and Git will handle the file appropriately. 
Unfortunately, the desired effect of cleaning up text files with mixed line endings and the undesired effect of corrupting binary files cannot be distinguished. In both cases CRLFs are removed in an irreversible way. For text files this is the right thing to do because CRLFs are line endings, while for binary files converting CRLFs corrupts data. Note, this safety check does not mean that a checkout will generate a file identical to the original file for a different setting of `core.eol` and `core.autocrlf`, but only for the current one. For example, a text file with `LF` would be accepted with `core.eol=lf` and could later be checked out with `core.eol=crlf`, in which case the resulting file would contain `CRLF`, although the original file contained `LF`. However, in both work trees the line endings would be consistent, that is either all `LF` or all `CRLF`, but never mixed. A file with mixed line endings would be reported by the `core.safecrlf` mechanism. core.autocrlf Setting this variable to "true" is the same as setting the `text` attribute to "auto" on all files and core.eol to "crlf". Set to true if you want to have `CRLF` line endings in your working directory and the repository has LF line endings. This variable can be set to `input`, in which case no output conversion is performed. core.checkRoundtripEncoding A comma- and/or whitespace-separated list of encodings that Git performs UTF-8 round trip checks on if they are used in a `working-tree-encoding` attribute (see [gitattributes[5]](gitattributes)). The default value is `SHIFT-JIS`. core.symlinks If false, symbolic links are checked out as small plain files that contain the link text. [git-update-index[1]](git-update-index) and [git-add[1]](git-add) will not change the recorded type to regular file. Useful on filesystems like FAT that do not support symbolic links.
The default is true, except [git-clone[1]](git-clone) or [git-init[1]](git-init) will probe and set core.symlinks false if appropriate when the repository is created. core.gitProxy A "proxy command" to execute (as `command host port`) instead of establishing direct connection to the remote server when using the Git protocol for fetching. If the variable value is in the "COMMAND for DOMAIN" format, the command is applied only on hostnames ending with the specified domain string. This variable may be set multiple times and is matched in the given order; the first match wins. Can be overridden by the `GIT_PROXY_COMMAND` environment variable (which always applies universally, without the special "for" handling). The special string `none` can be used as the proxy command to specify that no proxy be used for a given domain pattern. This is useful for excluding servers inside a firewall from proxy use, while defaulting to a common proxy for external domains. core.sshCommand If this variable is set, `git fetch` and `git push` will use the specified command instead of `ssh` when they need to connect to a remote system. The command is in the same form as the `GIT_SSH_COMMAND` environment variable and is overridden when the environment variable is set. core.ignoreStat If true, Git will avoid using lstat() calls to detect if files have changed by setting the "assume-unchanged" bit for those tracked files which it has updated identically in both the index and working tree. When files are modified outside of Git, the user will need to stage the modified files explicitly (e.g. see `Examples` section in [git-update-index[1]](git-update-index)). Git will not normally detect changes to those files. This is useful on systems where lstat() calls are very slow, such as CIFS/Microsoft Windows. False by default. core.preferSymlinkRefs Instead of the default "symref" format for HEAD and other symbolic reference files, use symbolic links. 
This is sometimes needed to work with old scripts that expect HEAD to be a symbolic link. core.alternateRefsCommand When advertising tips of available history from an alternate, use the shell to execute the specified command instead of [git-for-each-ref[1]](git-for-each-ref). The first argument is the absolute path of the alternate. Output must contain one hex object id per line (i.e., the same as produced by `git for-each-ref --format='%(objectname)'`). Note that you cannot generally put `git for-each-ref` directly into the config value, as it does not take a repository path as an argument (but you can wrap the command above in a shell script). core.alternateRefsPrefixes When listing references from an alternate, list only references that begin with the given prefix. Prefixes match as if they were given as arguments to [git-for-each-ref[1]](git-for-each-ref). To list multiple prefixes, separate them with whitespace. If `core.alternateRefsCommand` is set, setting `core.alternateRefsPrefixes` has no effect. core.bare If true, this repository is assumed to be `bare` and has no working directory associated with it. If this is the case, a number of commands that require a working directory will be disabled, such as [git-add[1]](git-add) or [git-merge[1]](git-merge). This setting is automatically guessed by [git-clone[1]](git-clone) or [git-init[1]](git-init) when the repository was created. By default, a repository that ends in "/.git" is assumed to be not bare (bare = false), while all other repositories are assumed to be bare (bare = true). core.worktree Set the path to the root of the working tree. If the `GIT_COMMON_DIR` environment variable is set, core.worktree is ignored and not used for determining the root of the working tree. This can be overridden by the `GIT_WORK_TREE` environment variable and the `--work-tree` command-line option. 
The value can be an absolute path or relative to the path to the .git directory, which is either specified by --git-dir or GIT\_DIR, or automatically discovered. If --git-dir or GIT\_DIR is specified but none of --work-tree, GIT\_WORK\_TREE and core.worktree is specified, the current working directory is regarded as the top level of your working tree. Note that this variable is honored even when set in a configuration file in a ".git" subdirectory of a directory and its value differs from the latter directory (e.g. "/path/to/.git/config" has core.worktree set to "/different/path"), which is most likely a misconfiguration. Running Git commands in the "/path/to" directory will still use "/different/path" as the root of the work tree and can cause confusion unless you know what you are doing (e.g. you are creating a read-only snapshot of the same index to a location different from the repository’s usual working tree). core.logAllRefUpdates Enable the reflog. Updates to a ref <ref> are logged to the file "`$GIT_DIR/logs/<ref>`", by appending the new and old SHA-1, the date/time and the reason of the update, but only when the file exists. If this configuration variable is set to `true`, a missing "`$GIT_DIR/logs/<ref>`" file is automatically created for branch heads (i.e. under `refs/heads/`), remote refs (i.e. under `refs/remotes/`), note refs (i.e. under `refs/notes/`), and the symbolic ref `HEAD`. If it is set to `always`, then a missing reflog is automatically created for any ref under `refs/`. This information can be used to determine what commit was the tip of a branch "2 days ago". This value is true by default in a repository that has a working directory associated with it, and false by default in a bare repository. core.repositoryFormatVersion Internal variable identifying the repository format and layout version. 
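As a hedged illustration of `core.logAllRefUpdates`, a repository that wants reflogs for every ref, not only branch heads, remotes, notes and `HEAD`, might carry a fragment like the following in its `.git/config` (the choice of `always` here is an example, not the default):

```
# Illustrative .git/config fragment: create a reflog for any ref
# under refs/ as soon as it is updated, not just the default set.
[core]
	logAllRefUpdates = always
```

With such a setting, a command like `git reflog refs/stash` can show how a ref outside `refs/heads/` has moved over time.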
core.sharedRepository When `group` (or `true`), the repository is made shareable between several users in a group (making sure all the files and objects are group-writable). When `all` (or `world` or `everybody`), the repository will be readable by all users, in addition to being group-shareable. When `umask` (or `false`), Git will use permissions reported by umask(2). When `0xxx`, where `0xxx` is an octal number, files in the repository will have this mode value. `0xxx` will override the user’s umask value (whereas the other options will only override requested parts of the user’s umask value). Examples: `0660` will make the repo read/write-able for the owner and group, but inaccessible to others (equivalent to `group` unless umask is e.g. `0022`). `0640` is a repository that is group-readable but not group-writable. See [git-init[1]](git-init). False by default. core.warnAmbiguousRefs If true, Git will warn you if the ref name you passed it is ambiguous and might match multiple refs in the repository. True by default. core.compression An integer -1..9, indicating a default compression level. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If set, this provides a default to other compression variables, such as `core.looseCompression` and `pack.compression`. core.looseCompression An integer -1..9, indicating the compression level for objects that are not in a pack file. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If not set, defaults to core.compression. If that is not set, defaults to 1 (best speed). core.packedGitWindowSize Number of bytes of a pack file to map into memory in a single mapping operation. Larger window sizes may allow your system to process a smaller number of large pack files more quickly. 
Smaller window sizes will negatively affect performance due to increased calls to the operating system’s memory manager, but may improve performance when accessing a large number of large pack files. Default is 1 MiB if NO\_MMAP was set at compile time, otherwise 32 MiB on 32 bit platforms and 1 GiB on 64 bit platforms. This should be reasonable for all users/operating systems. You probably do not need to adjust this value. Common unit suffixes of `k`, `m`, or `g` are supported. core.packedGitLimit Maximum number of bytes to map simultaneously into memory from pack files. If Git needs to access more than this many bytes at once to complete an operation it will unmap existing regions to reclaim virtual address space within the process. Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value. Common unit suffixes of `k`, `m`, or `g` are supported. core.deltaBaseCacheLimit Maximum number of bytes per thread to reserve for caching base objects that may be referenced by multiple deltified objects. By storing the entire decompressed base objects in a cache Git is able to avoid unpacking and decompressing frequently used base objects multiple times. Default is 96 MiB on all platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value. Common unit suffixes of `k`, `m`, or `g` are supported. core.bigFileThreshold The size of files considered "big", which as discussed below changes the behavior of numerous git commands, as well as how such files are stored within the repository. The default is 512 MiB. Common unit suffixes of `k`, `m`, or `g` are supported. Files above the configured limit will be: * Stored deflated in packfiles, without attempting delta compression. 
The default limit is primarily set with this use-case in mind. With it, most projects will have their source code and other text files delta compressed, but not larger binary media files. Storing large files without delta compression avoids excessive memory usage, at the slight expense of increased disk usage. * Treated as if they were labeled "binary" (see [gitattributes[5]](gitattributes)). E.g. [git-log[1]](git-log) and [git-diff[1]](git-diff) will not compute diffs for files above this limit. * Generally streamed when written, which avoids excessive memory usage, at the cost of some fixed overhead. Commands that make use of this include [git-archive[1]](git-archive), [git-fast-import[1]](git-fast-import), [git-index-pack[1]](git-index-pack), [git-unpack-objects[1]](git-unpack-objects) and [git-fsck[1]](git-fsck). core.excludesFile Specifies the pathname to the file that contains patterns to describe paths that are not meant to be tracked, in addition to `.gitignore` (per-directory) and `.git/info/exclude`. Defaults to `$XDG_CONFIG_HOME/git/ignore`. If `$XDG_CONFIG_HOME` is either not set or empty, `$HOME/.config/git/ignore` is used instead. See [gitignore[5]](gitignore). core.askPass Some commands (e.g. svn and http interfaces) that interactively ask for a password can be told to use an external program given via the value of this variable. Can be overridden by the `GIT_ASKPASS` environment variable. If not set, fall back to the value of the `SSH_ASKPASS` environment variable or, failing that, a simple password prompt. The external program shall be given a suitable prompt as a command-line argument and write the password on its STDOUT. core.attributesFile In addition to `.gitattributes` (per-directory) and `.git/info/attributes`, Git looks into this file for attributes (see [gitattributes[5]](gitattributes)). Path expansions are made the same way as for `core.excludesFile`. Its default value is `$XDG_CONFIG_HOME/git/attributes`. 
If `$XDG_CONFIG_HOME` is either not set or empty, `$HOME/.config/git/attributes` is used instead. core.hooksPath By default Git will look for your hooks in the `$GIT_DIR/hooks` directory. Set this to a different path, e.g. `/etc/git/hooks`, and Git will try to find your hooks in that directory, e.g. `/etc/git/hooks/pre-receive` instead of in `$GIT_DIR/hooks/pre-receive`. The path can be either absolute or relative. A relative path is taken as relative to the directory where the hooks are run (see the "DESCRIPTION" section of [githooks[5]](githooks)). This configuration variable is useful in cases where you’d like to centrally configure your Git hooks instead of configuring them on a per-repository basis, or as a more flexible and centralized alternative to having an `init.templateDir` where you’ve changed default hooks. core.editor Commands such as `commit` and `tag` that let you edit messages by launching an editor use the value of this variable when it is set, and the environment variable `GIT_EDITOR` is not set. See [git-var[1]](git-var). core.commentChar Commands such as `commit` and `tag` that let you edit messages treat a line that begins with this character as a comment, and remove such lines after the editor returns (default `#`). If set to "auto", `git-commit` would select a character that is not the beginning character of any line in existing commit messages. core.filesRefLockTimeout The length of time, in milliseconds, to retry when trying to lock an individual reference. Value 0 means not to retry at all; -1 means to try indefinitely. Default is 100 (i.e., retry for 100ms). core.packedRefsTimeout The length of time, in milliseconds, to retry when trying to lock the `packed-refs` file. Value 0 means not to retry at all; -1 means to try indefinitely. Default is 1000 (i.e., retry for 1 second). core.pager Text viewer for use by Git commands (e.g., `less`). The value is meant to be interpreted by the shell. 
The order of preference is the `$GIT_PAGER` environment variable, then `core.pager` configuration, then `$PAGER`, and then the default chosen at compile time (usually `less`). When the `LESS` environment variable is unset, Git sets it to `FRX` (if `LESS` environment variable is set, Git does not change it at all). If you want to selectively override Git’s default setting for `LESS`, you can set `core.pager` to e.g. `less -S`. This will be passed to the shell by Git, which will translate the final command to `LESS=FRX less -S`. The environment does not set the `S` option but the command line does, instructing less to truncate long lines. Similarly, setting `core.pager` to `less -+F` will deactivate the `F` option specified by the environment from the command-line, deactivating the "quit if one screen" behavior of `less`. One can specifically activate some flags for particular commands: for example, setting `pager.blame` to `less -S` enables line truncation only for `git blame`. Likewise, when the `LV` environment variable is unset, Git sets it to `-c`. You can override this setting by exporting `LV` with another value or setting `core.pager` to `lv +c`. core.whitespace A comma separated list of common whitespace problems to notice. `git diff` will use `color.diff.whitespace` to highlight them, and `git apply --whitespace=error` will consider them as errors. You can prefix `-` to disable any of them (e.g. `-trailing-space`): * `blank-at-eol` treats trailing whitespaces at the end of the line as an error (enabled by default). * `space-before-tab` treats a space character that appears immediately before a tab character in the initial indent part of the line as an error (enabled by default). * `indent-with-non-tab` treats a line that is indented with space characters instead of the equivalent tabs as an error (not enabled by default). * `tab-in-indent` treats a tab character in the initial indent part of the line as an error (not enabled by default). 
* `blank-at-eof` treats blank lines added at the end of file as an error (enabled by default). * `trailing-space` is a short-hand to cover both `blank-at-eol` and `blank-at-eof`. * `cr-at-eol` treats a carriage-return at the end of line as part of the line terminator, i.e. with it, `trailing-space` does not trigger if the character before such a carriage-return is not a whitespace (not enabled by default). * `tabwidth=<n>` tells how many character positions a tab occupies; this is relevant for `indent-with-non-tab` and when Git fixes `tab-in-indent` errors. The default tab width is 8. Allowed values are 1 to 63. core.fsync A comma-separated list of components of the repository that should be hardened via the core.fsyncMethod when created or modified. You can disable hardening of any component by prefixing it with a `-`. Items that are not hardened may be lost in the event of an unclean system shutdown. Unless you have special requirements, it is recommended that you leave this option empty or pick one of `committed`, `added`, or `all`. When this configuration is encountered, the set of components starts with the platform default value, disabled components are removed, and additional components are added. `none` resets the state so that the platform default is ignored. The empty string resets the fsync configuration to the platform default. The default on most platforms is equivalent to `core.fsync=committed,-loose-object`, which has good performance, but risks losing recent work in the event of an unclean system shutdown. * `none` clears the set of fsynced components. * `loose-object` hardens objects added to the repo in loose-object form. * `pack` hardens objects added to the repo in packfile form. * `pack-metadata` hardens packfile bitmaps and indexes. * `commit-graph` hardens the commit-graph file. * `index` hardens the index when it is modified. * `objects` is an aggregate option that is equivalent to `loose-object,pack`. 
* `reference` hardens references modified in the repo. * `derived-metadata` is an aggregate option that is equivalent to `pack-metadata,commit-graph`. * `committed` is an aggregate option that is currently equivalent to `objects`. This mode sacrifices some performance to ensure that work that is committed to the repository with `git commit` or similar commands is hardened. * `added` is an aggregate option that is currently equivalent to `committed,index`. This mode sacrifices additional performance to ensure that the results of commands like `git add` and similar operations are hardened. * `all` is an aggregate option that syncs all individual components above. core.fsyncMethod A value indicating the strategy Git will use to harden repository data using fsync and related primitives. * `fsync` uses the fsync() system call or platform equivalents. * `writeout-only` issues pagecache writeback requests, but depending on the filesystem and storage hardware, data added to the repository may not be durable in the event of a system crash. This is the default mode on macOS. * `batch` enables a mode that uses writeout-only flushes to stage multiple updates in the disk writeback cache and then does a single full fsync of a dummy file to trigger the disk cache flush at the end of the operation. Currently `batch` mode only applies to loose-object files. Other repository data is made durable as if `fsync` was specified. This mode is expected to be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems and on Windows for repos stored on NTFS or ReFS filesystems. core.fsyncObjectFiles This boolean will enable `fsync()` when writing object files. This setting is deprecated. Use core.fsync instead. This setting affects data added to the Git repository in loose-object form. When set to true, Git will issue an fsync or similar system call to flush caches so that loose-objects remain consistent in the face of an unclean system shutdown. 
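Combining the two settings above, a configuration that trades some performance for durability of staged and committed work might look like the following sketch (the particular pairing of `added` with `batch` is an illustrative choice, not a recommendation from this documentation):

```
# Illustrative .gitconfig fragment: harden everything that `git add`
# and `git commit` write, and batch loose-object syncs for speed.
[core]
	fsync = added
	fsyncMethod = batch
```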
core.preloadIndex Enable parallel index preload for operations like `git diff`. This can speed up operations like `git diff` and `git status`, especially on filesystems like NFS that have weak caching semantics and thus relatively high IO latencies. When enabled, Git will do the index comparison to the filesystem data in parallel, allowing overlapping I/O. Defaults to true. core.unsetenvvars Windows-only: comma-separated list of environment variable names that need to be unset before spawning any other process. Defaults to `PERL5LIB` to account for the fact that Git for Windows insists on using its own Perl interpreter. core.restrictinheritedhandles Windows-only: override whether spawned processes inherit only standard file handles (`stdin`, `stdout` and `stderr`) or all handles. Can be `auto`, `true` or `false`. Defaults to `auto`, which means `true` on Windows 7 and later, and `false` on older Windows versions. core.createObject You can set this to `link`, in which case a hardlink followed by a delete of the source is used to make sure that object creation will not overwrite existing objects. On some file system/operating system combinations, this is unreliable. Set this config setting to `rename` there; however, this removes the check that makes sure that existing object files will not get overwritten. core.notesRef When showing commit messages, also show notes which are stored in the given ref. The ref must be fully qualified. If the given ref does not exist, it is not an error but means that no notes should be printed. This setting defaults to "refs/notes/commits", and it can be overridden by the `GIT_NOTES_REF` environment variable. See [git-notes[1]](git-notes). core.commitGraph If true, then Git will read the commit-graph file (if it exists) to parse the graph structure of commits. Defaults to true. See [git-commit-graph[1]](git-commit-graph) for more information. 
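As a sketch of `core.notesRef`, pointing it at a custom, fully qualified notes ref might look like this (the `refs/notes/review` name is a hypothetical example):

```
# Illustrative .git/config fragment: read notes from refs/notes/review
# instead of the default refs/notes/commits when showing commits.
[core]
	notesRef = refs/notes/review
```

The same effect can be had for a single invocation via the `GIT_NOTES_REF` environment variable.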
core.useReplaceRefs If set to `false`, behave as if the `--no-replace-objects` option was given on the command line. See [git[1]](git) and [git-replace[1]](git-replace) for more information. core.multiPackIndex Use the multi-pack-index file to track multiple packfiles using a single index. See [git-multi-pack-index[1]](git-multi-pack-index) for more information. Defaults to true. core.sparseCheckout Enable "sparse checkout" feature. See [git-sparse-checkout[1]](git-sparse-checkout) for more information. core.sparseCheckoutCone Enables the "cone mode" of the sparse checkout feature. When the sparse-checkout file contains a limited set of patterns, this mode provides significant performance advantages. The "non-cone mode" can be requested to allow specifying more flexible patterns by setting this variable to `false`. See [git-sparse-checkout[1]](git-sparse-checkout) for more information. core.abbrev Set the length object names are abbreviated to. If unspecified or set to "auto", an appropriate value is computed based on the approximate number of packed objects in your repository, which hopefully is enough for abbreviated object names to stay unique for some time. If set to "no", no abbreviation is made and the object names are shown in their full length. The minimum length is 4. add.ignoreErrors add.ignore-errors (deprecated) Tells `git add` to continue adding files when some files cannot be added due to indexing errors. Equivalent to the `--ignore-errors` option of [git-add[1]](git-add). `add.ignore-errors` is deprecated, as it does not follow the usual naming convention for configuration variables. add.interactive.useBuiltin Set to `false` to fall back to the original Perl implementation of the interactive version of [git-add[1]](git-add) instead of the built-in version. Is `true` by default. alias.\* Command aliases for the [git[1]](git) command wrapper - e.g. 
after defining `alias.last = cat-file commit HEAD`, the invocation `git last` is equivalent to `git cat-file commit HEAD`. To avoid confusion and troubles with script usage, aliases that hide existing Git commands are ignored. Arguments are split by spaces; the usual shell quoting and escaping are supported. A quote pair or a backslash can be used to quote them. Note that the first word of an alias does not necessarily have to be a command. It can be a command-line option that will be passed into the invocation of `git`. In particular, this is useful when used with `-c` to pass in one-time configurations or `-p` to force pagination. For example, `loud-rebase = -c commit.verbose=true rebase` can be defined such that running `git loud-rebase` would be equivalent to `git -c commit.verbose=true rebase`. Also, `ps = -p status` would be a helpful alias since `git ps` would paginate the output of `git status` where the original command does not. If the alias expansion is prefixed with an exclamation point, it will be treated as a shell command. For example, defining `alias.new = !gitk --all --not ORIG_HEAD`, the invocation `git new` is equivalent to running the shell command `gitk --all --not ORIG_HEAD`. Note that shell commands will be executed from the top-level directory of a repository, which may not necessarily be the current directory. `GIT_PREFIX` is set as returned by running `git rev-parse --show-prefix` from the original current directory. See [git-rev-parse[1]](git-rev-parse). am.keepcr If true, git-am will call git-mailsplit for patches in mbox format with parameter `--keep-cr`. In this case git-mailsplit will not remove `\r` from lines ending with `\r\n`. Can be overridden by giving `--no-keep-cr` from the command line. See [git-am[1]](git-am), [git-mailsplit[1]](git-mailsplit). am.threeWay By default, `git am` will fail if the patch does not apply cleanly. 
When set to true, this setting tells `git am` to fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally (equivalent to giving the `--3way` option from the command line). Defaults to `false`. See [git-am[1]](git-am). apply.ignoreWhitespace When set to `change`, tells `git apply` to ignore changes in whitespace, in the same way as the `--ignore-space-change` option. When set to one of `no`, `none`, `never`, or `false`, tells `git apply` to respect all whitespace differences. See [git-apply[1]](git-apply). apply.whitespace Tells `git apply` how to handle whitespaces, in the same way as the `--whitespace` option. See [git-apply[1]](git-apply). blame.blankBoundary Show a blank commit object name for boundary commits in [git-blame[1]](git-blame). This option defaults to false. blame.coloring This determines the coloring scheme to be applied to blame output. It can be `repeatedLines`, `highlightRecent`, or `none` which is the default. blame.date Specifies the format used to output dates in [git-blame[1]](git-blame). If unset, the iso format is used. For supported values, see the discussion of the `--date` option at [git-log[1]](git-log). blame.showEmail Show the author email instead of author name in [git-blame[1]](git-blame). This option defaults to false. blame.showRoot Do not treat root commits as boundaries in [git-blame[1]](git-blame). This option defaults to false. blame.ignoreRevsFile Ignore revisions listed in the file, one unabbreviated object name per line, in [git-blame[1]](git-blame). Whitespace and comments beginning with `#` are ignored. This option may be repeated multiple times. Empty file names will reset the list of ignored revisions. This option will be handled before the command line option `--ignore-revs-file`. 
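Putting the blame options above together, a repository that keeps its bulk-reformatting commits out of blame output might use a sketch like this (the file name `.git-blame-ignore-revs` is a common convention, not a built-in default):

```
# Illustrative .git/config fragment: skip the revisions listed in the
# file when blaming, and mark affected lines with `?` or `*`.
[blame]
	ignoreRevsFile = .git-blame-ignore-revs
	markIgnoredLines = true
	markUnblamableLines = true
```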
blame.markUnblamableLines Mark lines that were changed by an ignored revision that we could not attribute to another commit with a `*` in the output of [git-blame[1]](git-blame). blame.markIgnoredLines Mark lines that were changed by an ignored revision that we attributed to another commit with a `?` in the output of [git-blame[1]](git-blame). branch.autoSetupMerge Tells `git branch`, `git switch` and `git checkout` to set up new branches so that [git-pull[1]](git-pull) will appropriately merge from the starting point branch. Note that even if this option is not set, this behavior can be chosen per-branch using the `--track` and `--no-track` options. The valid settings are: `false` — no automatic setup is done; `true` — automatic setup is done when the starting point is a remote-tracking branch; `always` — automatic setup is done when the starting point is either a local branch or remote-tracking branch; `inherit` — if the starting point has a tracking configuration, it is copied to the new branch; `simple` — automatic setup is done only when the starting point is a remote-tracking branch and the new branch has the same name as the remote branch. This option defaults to true. branch.autoSetupRebase When a new branch is created with `git branch`, `git switch` or `git checkout` that tracks another branch, this variable tells Git to set up pull to rebase instead of merge (see "branch.<name>.rebase"). When `never`, rebase is never automatically set to true. When `local`, rebase is set to true for tracked branches of other local branches. When `remote`, rebase is set to true for tracked branches of remote-tracking branches. When `always`, rebase will be set to true for all tracking branches. See "branch.autoSetupMerge" for details on how to set up a branch to track another branch. This option defaults to never. branch.sort This variable controls the sort ordering of branches when displayed by [git-branch[1]](git-branch). 
Without the "--sort=<value>" option provided, the value of this variable will be used as the default. See [git-for-each-ref[1]](git-for-each-ref) field names for valid values. branch.<name>.remote When on branch <name>, it tells `git fetch` and `git push` which remote to fetch from/push to. The remote to push to may be overridden with `remote.pushDefault` (for all branches). The remote to push to, for the current branch, may be further overridden by `branch.<name>.pushRemote`. If no remote is configured, or if you are not on any branch and there is more than one remote defined in the repository, it defaults to `origin` for fetching and `remote.pushDefault` for pushing. Additionally, `.` (a period) is the current local repository (a dot-repository), see `branch.<name>.merge`'s final note below. branch.<name>.pushRemote When on branch <name>, it overrides `branch.<name>.remote` for pushing. It also overrides `remote.pushDefault` for pushing from branch <name>. When you pull from one place (e.g. your upstream) and push to another place (e.g. your own publishing repository), you would want to set `remote.pushDefault` to specify the remote to push to for all branches, and use this option to override it for a specific branch. branch.<name>.merge Defines, together with branch.<name>.remote, the upstream branch for the given branch. It tells `git fetch`/`git pull`/`git rebase` which branch to merge and can also affect `git push` (see push.default). When in branch <name>, it tells `git fetch` the default refspec to be marked for merging in FETCH\_HEAD. The value is handled like the remote part of a refspec, and must match a ref which is fetched from the remote given by "branch.<name>.remote". The merge information is used by `git pull` (which at first calls `git fetch`) to lookup the default branch for merging. Without this option, `git pull` defaults to merge the first refspec fetched. Specify multiple values to get an octopus merge. 
If you wish to set up `git pull` so that it merges into <name> from another branch in the local repository, you can point branch.<name>.merge to the desired branch, and use the relative path setting `.` (a period) for branch.<name>.remote. branch.<name>.mergeOptions Sets default options for merging into branch <name>. The syntax and supported options are the same as those of [git-merge[1]](git-merge), but option values containing whitespace characters are currently not supported. branch.<name>.rebase When true, rebase the branch <name> on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. See "pull.rebase" for doing this in a non branch-specific manner. When `merges` (or just `m`), pass the `--rebase-merges` option to `git rebase` so that the local merge commits are included in the rebase (see [git-rebase[1]](git-rebase) for details). When the value is `interactive` (or just `i`), the rebase is run in interactive mode. **NOTE**: this is a possibly dangerous operation; do **not** use it unless you understand the implications (see [git-rebase[1]](git-rebase) for details). branch.<name>.description Branch description, which can be edited with `git branch --edit-description`. The branch description is automatically added in the format-patch cover letter or request-pull summary. browser.<tool>.cmd Specify the command to invoke the specified browser. The specified command is evaluated in shell with the URLs passed as arguments. (See [git-web--browse[1]](git-web--browse).) browser.<tool>.path Override the path for the given tool that may be used to browse HTML help (see `-w` option in [git-help[1]](git-help)) or a working repository in gitweb (see [git-instaweb[1]](git-instaweb)). bundle.\* The `bundle.*` keys may appear in a bundle list file found via the `git clone --bundle-uri` option. These keys currently have no effect if placed in a repository config file, though this will change in the future. 
See [the bundle URI design document](bundle-uri) for more details. bundle.version This integer value advertises the version of the bundle list format used by the bundle list. Currently, the only accepted value is `1`. bundle.mode This string value should be either `all` or `any`. This value describes whether all of the advertised bundles are required to unbundle a complete understanding of the bundled information (`all`) or if any one of the listed bundle URIs is sufficient (`any`). bundle.<id>.\* The `bundle.<id>.*` keys are used to describe a single item in the bundle list, grouped under `<id>` for identification purposes. bundle.<id>.uri This string value defines the URI by which Git can reach the contents of this `<id>`. This URI may be a bundle file or another bundle list. checkout.defaultRemote When you run `git checkout <something>` or `git switch <something>` and only have one remote, it may implicitly fall back on checking out and tracking e.g. `origin/<something>`. This stops working as soon as you have more than one remote with a `<something>` reference. This setting allows for setting the name of a preferred remote that should always win when it comes to disambiguation. The typical use-case is to set this to `origin`. Currently this is used by [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout) when `git checkout <something>` or `git switch <something>` will checkout the `<something>` branch on another remote, and by [git-worktree[1]](git-worktree) when `git worktree add` refers to a remote branch. This setting might be used for other checkout-like commands or functionality in the future. checkout.guess Provides the default value for the `--guess` or `--no-guess` option in `git checkout` and `git switch`. See [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout). checkout.workers The number of parallel workers to use when updating the working tree. The default is one, i.e. sequential execution. 
If set to a value less than one, Git will use as many workers as the number of logical cores available. This setting and `checkout.thresholdForParallelism` affect all commands that perform checkout. E.g. checkout, clone, reset, sparse-checkout, etc. Note: parallel checkout usually delivers better performance for repositories located on SSDs or over NFS. For repositories on spinning disks and/or machines with a small number of cores, the default sequential checkout often performs better. The size and compression level of a repository might also influence how well the parallel version performs. checkout.thresholdForParallelism When running parallel checkout with a small number of files, the cost of subprocess spawning and inter-process communication might outweigh the parallelization gains. This setting allows to define the minimum number of files for which parallel checkout should be attempted. The default is 100. clean.requireForce A boolean to make git-clean do nothing unless given -f, -i or -n. Defaults to true. clone.defaultRemoteName The name of the remote to create when cloning a repository. Defaults to `origin`, and can be overridden by passing the `--origin` command-line option to [git-clone[1]](git-clone). clone.rejectShallow Reject to clone a repository if it is a shallow one, can be overridden by passing option `--reject-shallow` in command line. See [git-clone[1]](git-clone) clone.filterSubmodules If a partial clone filter is provided (see `--filter` in [git-rev-list[1]](git-rev-list)) and `--recurse-submodules` is used, also apply the filter to submodules. color.advice A boolean to enable/disable color in hints (e.g. when a push failed, see `advice.*` for a list). May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the error output goes to a terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.advice.hint Use customized color for hints. 
color.blame.highlightRecent Specify the line annotation color for `git blame --color-by-age` depending upon the age of the line. This setting should be set to a comma-separated list of color and date settings, starting and ending with a color, the dates should be set from oldest to newest. The metadata will be colored with the specified colors if the line was introduced before the given timestamp, overwriting older timestamped colors. Instead of an absolute timestamp relative timestamps work as well, e.g. `2.weeks.ago` is valid to address anything older than 2 weeks. It defaults to `blue,12 month ago,white,1 month ago,red`, which colors everything older than one year blue, recent changes between one month and one year old are kept white, and lines introduced within the last month are colored red. color.blame.repeatedLines Use the specified color to colorize line annotations for `git blame --color-lines`, if they come from the same commit as the preceding line. Defaults to cyan. color.branch A boolean to enable/disable color in the output of [git-branch[1]](git-branch). May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the output is to a terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.branch.<slot> Use customized color for branch coloration. `<slot>` is one of `current` (the current branch), `local` (a local branch), `remote` (a remote-tracking branch in refs/remotes/), `upstream` (upstream tracking branch), `plain` (other refs). color.diff Whether to use ANSI escape sequences to add color to patches. If this is set to `always`, [git-diff[1]](git-diff), [git-log[1]](git-log), and [git-show[1]](git-show) will use color for all patches. If it is set to `true` or `auto`, those commands will only use color when output is to the terminal. If unset, then the value of `color.ui` is used (`auto` by default). 
This does not affect [git-format-patch[1]](git-format-patch) or the `git-diff-*` plumbing commands. Can be overridden on the command line with the `--color[=<when>]` option. color.diff.<slot> Use customized color for diff colorization. `<slot>` specifies which part of the patch to use the specified color, and is one of `context` (context text - `plain` is a historical synonym), `meta` (metainformation), `frag` (hunk header), `func` (function in hunk header), `old` (removed lines), `new` (added lines), `commit` (commit headers), `whitespace` (highlighting whitespace errors), `oldMoved` (deleted lines), `newMoved` (added lines), `oldMovedDimmed`, `oldMovedAlternative`, `oldMovedAlternativeDimmed`, `newMovedDimmed`, `newMovedAlternative` `newMovedAlternativeDimmed` (See the `<mode>` setting of `--color-moved` in [git-diff[1]](git-diff) for details), `contextDimmed`, `oldDimmed`, `newDimmed`, `contextBold`, `oldBold`, and `newBold` (see [git-range-diff[1]](git-range-diff) for details). color.decorate.<slot> Use customized color for `git log --decorate` output. `<slot>` is one of `branch`, `remoteBranch`, `tag`, `stash` or `HEAD` for local branches, remote-tracking branches, tags, stash and HEAD, respectively and `grafted` for grafted commits. color.grep When set to `always`, always highlight matches. When `false` (or `never`), never. When set to `true` or `auto`, use color only when the output is written to the terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.grep.<slot> Use customized color for grep colorization. 
`<slot>` specifies which part of the line to use the specified color, and is one of `context` non-matching text in context lines (when using `-A`, `-B`, or `-C`) `filename` filename prefix (when not using `-h`) `function` function name lines (when using `-p`) `lineNumber` line number prefix (when using `-n`) `column` column number prefix (when using `--column`) `match` matching text (same as setting `matchContext` and `matchSelected`) `matchContext` matching text in context lines `matchSelected` matching text in selected lines. Also, used to customize the following [git-log[1]](git-log) subcommands: `--grep`, `--author` and `--committer`. `selected` non-matching text in selected lines. Also, used to customize the following [git-log[1]](git-log) subcommands: `--grep`, `--author` and `--committer`. `separator` separators between fields on a line (`:`, `-`, and `=`) and between hunks (`--`) color.interactive When set to `always`, always use colors for interactive prompts and displays (such as those used by "git-add --interactive" and "git-clean --interactive"). When false (or `never`), never. When set to `true` or `auto`, use colors only when the output is to the terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.interactive.<slot> Use customized color for `git add --interactive` and `git clean --interactive` output. `<slot>` may be `prompt`, `header`, `help` or `error`, for four distinct types of normal output from interactive commands. color.pager A boolean to specify whether `auto` color modes should colorize output going to the pager. Defaults to true; set this to false if your pager does not understand ANSI color codes. color.push A boolean to enable/disable color in push errors. May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the error output goes to a terminal. If unset, then the value of `color.ui` is used (`auto` by default). 
color.push.error Use customized color for push errors. color.remote If set, keywords at the start of the line are highlighted. The keywords are "error", "warning", "hint" and "success", and are matched case-insensitively. May be set to `always`, `false` (or `never`) or `auto` (or `true`). If unset, then the value of `color.ui` is used (`auto` by default). color.remote.<slot> Use customized color for each remote keyword. `<slot>` may be `hint`, `warning`, `success` or `error` which match the corresponding keyword. color.showBranch A boolean to enable/disable color in the output of [git-show-branch[1]](git-show-branch). May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the output is to a terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.status A boolean to enable/disable color in the output of [git-status[1]](git-status). May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the output is to a terminal. If unset, then the value of `color.ui` is used (`auto` by default). color.status.<slot> Use customized color for status colorization. `<slot>` is one of `header` (the header text of the status message), `added` or `updated` (files which are added but not committed), `changed` (files which are changed but not added in the index), `untracked` (files which are not tracked by Git), `branch` (the current branch), `nobranch` (the color the `no branch` warning is shown in, defaulting to red), `localBranch` or `remoteBranch` (the local and remote branch names, respectively, when branch and tracking information is displayed in the status short-format), or `unmerged` (files which have unmerged changes). color.transport A boolean to enable/disable color when pushes are rejected. May be set to `always`, `false` (or `never`) or `auto` (or `true`), in which case colors are used only when the error output goes to a terminal. 
If unset, then the value of `color.ui` is used (`auto` by default). color.transport.rejected Use customized color when a push was rejected. color.ui This variable determines the default value for variables such as `color.diff` and `color.grep` that control the use of color per command family. Its scope will expand as more commands learn configuration to set a default for the `--color` option. Set it to `false` or `never` if you prefer Git commands not to use color unless enabled explicitly with some other configuration or the `--color` option. Set it to `always` if you want all output not intended for machine consumption to use color, to `true` or `auto` (this is the default since Git 1.8.4) if you want such output to use color when written to the terminal. column.ui Specify whether supported commands should output in columns. This variable consists of a list of tokens separated by spaces or commas: These options control when the feature should be enabled (defaults to `never`): `always` always show in columns `never` never show in columns `auto` show in columns if the output is to the terminal These options control layout (defaults to `column`). Setting any of these implies `always` if none of `always`, `never`, or `auto` are specified. `column` fill columns before rows `row` fill rows before columns `plain` show in one column Finally, these options can be combined with a layout option (defaults to `nodense`): `dense` make unequal size columns to utilize more space `nodense` make equal size columns column.branch Specify whether to output branch listing in `git branch` in columns. See `column.ui` for details. column.clean Specify the layout when list items in `git clean -i`, which always shows files and directories in columns. See `column.ui` for details. column.status Specify whether to output untracked files in `git status` in columns. See `column.ui` for details. column.tag Specify whether to output tag listing in `git tag` in columns. 
See `column.ui` for details. commit.cleanup This setting overrides the default of the `--cleanup` option in `git commit`. See [git-commit[1]](git-commit) for details. Changing the default can be useful when you always want to keep lines that begin with comment character `#` in your log message, in which case you would do `git config commit.cleanup whitespace` (note that you will have to remove the help lines that begin with `#` in the commit log template yourself, if you do this). commit.gpgSign A boolean to specify whether all commits should be GPG signed. Use of this option when doing operations such as rebase can result in a large number of commits being signed. It may be convenient to use an agent to avoid typing your GPG passphrase several times. commit.status A boolean to enable/disable inclusion of status information in the commit message template when using an editor to prepare the commit message. Defaults to true. commit.template Specify the pathname of a file to use as the template for new commit messages. commit.verbose A boolean or int to specify the level of verbose with `git commit`. See [git-commit[1]](git-commit). commitGraph.generationVersion Specifies the type of generation number version to use when writing or reading the commit-graph file. If version 1 is specified, then the corrected commit dates will not be written or read. Defaults to 2. commitGraph.maxNewFilters Specifies the default value for the `--max-new-filters` option of `git commit-graph write` (c.f., [git-commit-graph[1]](git-commit-graph)). commitGraph.readChangedPaths If true, then git will use the changed-path Bloom filters in the commit-graph file (if it exists, and they are present). Defaults to true. See [git-commit-graph[1]](git-commit-graph) for more information. credential.helper Specify an external helper to be called when a username or password credential is needed; the helper may consult external storage to avoid prompting the user for the credentials. 
This is normally the name of a credential helper with possible arguments, but may also be an absolute path with arguments or, if preceded by `!`, shell commands. Note that multiple helpers may be defined. See [gitcredentials[7]](gitcredentials) for details and examples. credential.useHttpPath When acquiring credentials, consider the "path" component of an http or https URL to be important. Defaults to false. See [gitcredentials[7]](gitcredentials) for more information. credential.username If no username is set for a network authentication, use this username by default. See credential.<context>.\* below, and [gitcredentials[7]](gitcredentials). credential.<url>.\* Any of the credential.\* options above can be applied selectively to some credentials. For example "credential.https://example.com.username" would set the default username only for https connections to example.com. See [gitcredentials[7]](gitcredentials) for details on how URLs are matched. credentialCache.ignoreSIGHUP Tell git-credential-cache—​daemon to ignore SIGHUP, instead of quitting. credentialStore.lockTimeoutMS The length of time, in milliseconds, for git-credential-store to retry when trying to lock the credentials file. Value 0 means not to retry at all; -1 means to try indefinitely. Default is 1000 (i.e., retry for 1s). completion.commands This is only used by git-completion.bash to add or remove commands from the list of completed commands. Normally only porcelain commands and a few select others are completed. You can add more commands, separated by space, in this variable. Prefixing the command with `-` will remove it from the existing list. diff.autoRefreshIndex When using `git diff` to compare with work tree files, do not consider stat-only change as changed. Instead, silently run `git update-index --refresh` to update the cached stat information for paths whose contents in the work tree match the contents in the index. This option defaults to true. 
Note that this affects only `git diff` Porcelain, and not lower level `diff` commands such as `git diff-files`. diff.dirstat A comma separated list of `--dirstat` parameters specifying the default behavior of the `--dirstat` option to [git-diff[1]](git-diff) and friends. The defaults can be overridden on the command line (using `--dirstat=<param1,param2,...>`). The fallback defaults (when not changed by `diff.dirstat`) are `changes,noncumulative,3`. The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. 
Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `files,10,cumulative`. diff.statGraphWidth Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch. diff.context Generate diffs with <n> lines of context instead of the default of 3. This value is overridden by the -U option. diff.interHunkContext Show the context between diff hunks, up to the specified number of lines, thereby fusing the hunks that are close to each other. This value serves as the default for the `--inter-hunk-context` command line option. diff.external If this config variable is set, diff generation is not performed using the internal diff machinery, but using the given command. Can be overridden with the ‘GIT\_EXTERNAL\_DIFF’ environment variable. The command is called with parameters as described under "git Diffs" in [git[1]](git). Note: if you want to use an external diff program only on a subset of your files, you might want to use [gitattributes[5]](gitattributes) instead. diff.ignoreSubmodules Sets the default value of --ignore-submodules. Note that this affects only `git diff` Porcelain, and not lower level `diff` commands such as `git diff-files`. `git checkout` and `git switch` also honor this setting when reporting uncommitted changes. Setting it to `all` disables the submodule summary normally shown by `git commit` and `git status` when `status.submoduleSummary` is set unless it is overridden by using the --ignore-submodules command-line option. The `git submodule` commands are not affected by this setting. By default this is set to untracked so that any untracked submodules are ignored. diff.mnemonicPrefix If set, `git diff` uses a prefix pair that is different from the standard "a/" and "b/" depending on what is being compared. 
When this configuration is in effect, reverse diff output also swaps the order of the prefixes: `git diff` compares the (i)ndex and the (w)ork tree; `git diff HEAD` compares a (c)ommit and the (w)ork tree; `git diff --cached` compares a (c)ommit and the (i)ndex; `git diff HEAD:file1 file2` compares an (o)bject and a (w)ork tree entity; `git diff --no-index a b` compares two non-git things (1) and (2). diff.noprefix If set, `git diff` does not show any source or destination prefix. diff.relative If set to `true`, `git diff` does not show changes outside of the directory and show pathnames relative to the current directory. diff.orderFile File indicating how to order files within a diff. See the `-O` option to [git-diff[1]](git-diff) for details. If `diff.orderFile` is a relative pathname, it is treated as relative to the top of the working tree. diff.renameLimit The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the `git diff` option `-l`. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off. diff.renames Whether and how Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to true. Note that this affects only `git diff` Porcelain like [git-diff[1]](git-diff) and [git-log[1]](git-log), and not lower level commands such as [git-diff-files[1]](git-diff-files). diff.suppressBlankEmpty A boolean to inhibit the standard behavior of printing a space before each empty output line. Defaults to false. diff.submodule Specify the format in which differences in submodules are shown. The "short" format just shows the names of the commits at the beginning and end of the range. The "log" format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. 
The "diff" format shows an inline diff of the changed contents of the submodule. Defaults to "short". diff.wordRegex A POSIX Extended Regular Expression used to determine what is a "word" when performing word-by-word difference calculations. Character sequences that match the regular expression are "words", all other characters are **ignorable** whitespace. diff.<driver>.command The custom diff driver command. See [gitattributes[5]](gitattributes) for details. diff.<driver>.xfuncname The regular expression that the diff driver should use to recognize the hunk header. A built-in pattern may also be used. See [gitattributes[5]](gitattributes) for details. diff.<driver>.binary Set this option to true to make the diff driver treat files as binary. See [gitattributes[5]](gitattributes) for details. diff.<driver>.textconv The command that the diff driver should call to generate the text-converted version of a file. The result of the conversion is used to generate a human-readable diff. See [gitattributes[5]](gitattributes) for details. diff.<driver>.wordRegex The regular expression that the diff driver should use to split words in a line. See [gitattributes[5]](gitattributes) for details. diff.<driver>.cachetextconv Set this option to true to make the diff driver cache the text conversion outputs. See [gitattributes[5]](gitattributes) for details. * araxis * bc * codecompare * deltawalker * diffmerge * diffuse * ecmerge * emerge * examdiff * guiffy * gvimdiff * kdiff3 * kompare * meld * nvimdiff * opendiff * p4merge * smerge * tkdiff * vimdiff * winmerge * xxdiff diff.indentHeuristic Set this option to `false` to disable the default heuristics that shift diff hunk boundaries to make patches easier to read. diff.algorithm Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. 
`patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". diff.wsErrorHighlight Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` reset the list to `new` and `all` is a shorthand for `old,new,context`. The whitespace errors are colored with `color.diff.whitespace`. The command line option `--ws-error-highlight=<kind>` overrides this setting. diff.colorMoved If set to either a valid `<mode>` or a true value, moved lines in a diff are colored differently, for details of valid modes see `--color-moved` in [git-diff[1]](git-diff). If simply set to true the default color mode will be used. When set to false, moved lines are not colored. diff.colorMovedWS When moved lines are colored using e.g. the `diff.colorMoved` setting, this option controls the `<mode>` how spaces are treated for details of valid modes see `--color-moved-ws` in [git-diff[1]](git-diff). diff.tool Controls which diff tool is used by [git-difftool[1]](git-difftool). This variable overrides the value configured in `merge.tool`. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<tool>.cmd variable is defined. diff.guitool Controls which diff tool is used by [git-difftool[1]](git-difftool) when the -g/--gui flag is specified. This variable overrides the value configured in `merge.guitool`. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<guitool>.cmd variable is defined. difftool.<tool>.cmd Specify the command to invoke the specified diff tool. 
The specified command is evaluated in shell with the following variables available: `LOCAL` is set to the name of the temporary file containing the contents of the diff pre-image and `REMOTE` is set to the name of the temporary file containing the contents of the diff post-image. See the `--tool=<tool>` option in [git-difftool[1]](git-difftool) for more details. difftool.<tool>.path Override the path for the given tool. This is useful in case your tool is not in the PATH. difftool.trustExitCode Exit difftool if the invoked diff tool returns a non-zero exit status. See the `--trust-exit-code` option in [git-difftool[1]](git-difftool) for more details. difftool.prompt Prompt before each invocation of the diff tool. extensions.objectFormat Specify the hash algorithm to use. The acceptable values are `sha1` and `sha256`. If not specified, `sha1` is assumed. It is an error to specify this key unless `core.repositoryFormatVersion` is 1. Note that this setting should only be set by [git-init[1]](git-init) or [git-clone[1]](git-clone). Trying to change it after initialization will not work and will produce hard-to-diagnose issues. extensions.worktreeConfig If enabled, then worktrees will load config settings from the `$GIT_DIR/config.worktree` file in addition to the `$GIT_COMMON_DIR/config` file. Note that `$GIT_COMMON_DIR` and `$GIT_DIR` are the same for the main working tree, while other working trees have `$GIT_DIR` equal to `$GIT_COMMON_DIR/worktrees/<id>/`. The settings in the `config.worktree` file will override settings from any other config files. When enabling `extensions.worktreeConfig`, you must be careful to move certain values from the common config file to the main working tree’s `config.worktree` file, if present: * `core.worktree` must be moved from `$GIT_COMMON_DIR/config` to `$GIT_COMMON_DIR/config.worktree`. * If `core.bare` is true, then it must be moved from `$GIT_COMMON_DIR/config` to `$GIT_COMMON_DIR/config.worktree`. 
It may also be beneficial to adjust the locations of `core.sparseCheckout` and `core.sparseCheckoutCone` depending on your desire for customizable sparse-checkout settings for each worktree. By default, the `git sparse-checkout` builtin enables `extensions.worktreeConfig`, assigns these config values on a per-worktree basis, and uses the `$GIT_DIR/info/sparse-checkout` file to specify the sparsity for each worktree independently. See [git-sparse-checkout[1]](git-sparse-checkout) for more details. For historical reasons, `extensions.worktreeConfig` is respected regardless of the `core.repositoryFormatVersion` setting. fastimport.unpackLimit If the number of objects imported by [git-fast-import[1]](git-fast-import) is below this limit, then the objects will be unpacked into loose object files. However if the number of imported objects equals or exceeds this limit then the pack will be stored as a pack. Storing the pack from a fast-import can make the import operation complete faster, especially on slow filesystems. If not set, the value of `transfer.unpackLimit` is used instead. feature.\* The config settings that start with `feature.` modify the defaults of a group of other config settings. These groups are created by the Git developer community as recommended defaults and are subject to change. In particular, new config options may be added with different defaults. feature.experimental Enable config options that are new to Git, and are being considered for future defaults. Config settings included here may be added or removed with each release, including minor version updates. These settings may have unintended interactions since they are so new. Please enable this setting if you are interested in providing feedback on experimental features. The new default values are: * `fetch.negotiationAlgorithm=skipping` may improve fetch negotiation times by skipping more commits at a time, reducing the number of round trips. 
* `gc.cruftPacks=true` reduces disk space used by unreachable objects during garbage collection, preventing loose object explosions. feature.manyFiles Enable config options that optimize for repos with many files in the working directory. With many files, commands such as `git status` and `git checkout` may be slow and these new defaults improve performance: * `index.version=4` enables path-prefix compression in the index. * `core.untrackedCache=true` enables the untracked cache. This setting assumes that mtime is working on your machine. fetch.recurseSubmodules This option controls whether `git fetch` (and the underlying fetch in `git pull`) will recursively fetch into populated submodules. This option can be set either to a boolean value or to `on-demand`. Setting it to a boolean changes the behavior of fetch and pull to recurse unconditionally into submodules when set to true or to not recurse at all when set to false. When set to `on-demand`, fetch and pull will only recurse into a populated submodule when its superproject retrieves a commit that updates the submodule’s reference. Defaults to `on-demand`, or to the value of `submodule.recurse` if set. fetch.fsckObjects If it is set to true, git-fetch-pack will check all fetched objects. See `transfer.fsckObjects` for what’s checked. Defaults to false. If not set, the value of `transfer.fsckObjects` is used instead. fetch.fsck.<msg-id> Acts like `fsck.<msg-id>`, but is used by [git-fetch-pack[1]](git-fetch-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.<msg-id>` documentation for details. fetch.fsck.skipList Acts like `fsck.skipList`, but is used by [git-fetch-pack[1]](git-fetch-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.skipList` documentation for details. fetch.unpackLimit If the number of objects fetched over the Git native transfer is below this limit, then the objects will be unpacked into loose object files. 
However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a fetch can make the fetch operation complete faster, especially on slow filesystems. If not set, the value of `transfer.unpackLimit` is used instead. fetch.prune If true, fetch will automatically behave as if the `--prune` option was given on the command line. See also `remote.<name>.prune` and the PRUNING section of [git-fetch[1]](git-fetch). fetch.pruneTags If true, fetch will automatically behave as if the `refs/tags/*:refs/tags/*` refspec was provided when pruning, if not set already. This allows for setting both this option and `fetch.prune` to maintain a one-to-one mapping to upstream refs. See also `remote.<name>.pruneTags` and the PRUNING section of [git-fetch[1]](git-fetch). fetch.output Control how ref update status is printed. Valid values are `full` and `compact`. Default value is `full`. See section OUTPUT in [git-fetch[1]](git-fetch) for details. fetch.negotiationAlgorithm Control how information about the commits in the local repository is sent when negotiating the contents of the packfile to be sent by the server. Set to "consecutive" to use an algorithm that walks over consecutive commits checking each one. Set to "skipping" to use an algorithm that skips commits in an effort to converge faster, but may result in a larger-than-necessary packfile; or set to "noop" to not send any information at all, which will almost certainly result in a larger-than-necessary packfile, but will skip the negotiation step. Set to "default" to override settings made previously and use the default behaviour. The default is normally "consecutive", but if `feature.experimental` is true, then the default is "skipping". Unknown values will cause `git fetch` to error out. See also the `--negotiate-only` and `--negotiation-tip` options to [git-fetch[1]](git-fetch).
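Taken together, the fetch-related settings above are plain entries in a `[fetch]` section of a config file. For example (the specific values here are illustrative, not recommendations):

```
[fetch]
	prune = true
	pruneTags = true
	parallel = 4
	negotiationAlgorithm = skipping
```

With `prune` and `pruneTags` both set, the local remote-tracking refs and tags stay a one-to-one mirror of the upstream refs, as described above.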
fetch.showForcedUpdates Set to false to enable `--no-show-forced-updates` in [git-fetch[1]](git-fetch) and [git-pull[1]](git-pull) commands. Defaults to true. fetch.parallel Specifies the maximal number of fetch operations to be run in parallel at a time (submodules, or remotes when the `--multiple` option of [git-fetch[1]](git-fetch) is in effect). A value of 0 will give some reasonable default. If unset, it defaults to 1. For submodules, this setting can be overridden using the `submodule.fetchJobs` config setting. fetch.writeCommitGraph Set to true to write a commit-graph after every `git fetch` command that downloads a pack-file from a remote. Using the `--split` option, most executions will create a very small commit-graph file on top of the existing commit-graph file(s). Occasionally, these files will merge and the write may take longer. Having an updated commit-graph file helps performance of many Git commands, including `git merge-base`, `git push -f`, and `git log --graph`. Defaults to false. format.attach Enable multipart/mixed attachments as the default for `format-patch`. The value can also be a double quoted string which will enable attachments as the default and set the value as the boundary. See the --attach option in [git-format-patch[1]](git-format-patch). format.from Provides the default value for the `--from` option to format-patch. Accepts a boolean value, or a name and email address. If false, format-patch defaults to `--no-from`, using commit authors directly in the "From:" field of patch mails. If true, format-patch defaults to `--from`, using your committer identity in the "From:" field of patch mails and including a "From:" field in the body of the patch mail if different. If set to a non-boolean value, format-patch uses that value instead of your committer identity. Defaults to false. format.forceInBodyFrom Provides the default value for the `--[no-]force-in-body-from` option to format-patch. Defaults to false. 
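The `format.*` variables combine in the same way; as a sketch, a configuration that always attaches patches and rewrites the "From:" header might look like this (the boundary string is a made-up example):

```
[format]
	attach = "----=_custom_boundary"
	from = true
	forceInBodyFrom = true
```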
format.numbered A boolean which can enable or disable sequence numbers in patch subjects. It defaults to "auto" which enables it only if there is more than one patch. It can be enabled or disabled for all messages by setting it to "true" or "false". See --numbered option in [git-format-patch[1]](git-format-patch). format.headers Additional email headers to include in a patch to be submitted by mail. See [git-format-patch[1]](git-format-patch). format.to format.cc Additional recipients to include in a patch to be submitted by mail. See the --to and --cc options in [git-format-patch[1]](git-format-patch). format.subjectPrefix The default for format-patch is to output files with the `[PATCH]` subject prefix. Use this variable to change that prefix. format.coverFromDescription The default mode for format-patch to determine which parts of the cover letter will be populated using the branch’s description. See the `--cover-from-description` option in [git-format-patch[1]](git-format-patch). format.signature The default for format-patch is to output a signature containing the Git version number. Use this variable to change that default. Set this variable to the empty string ("") to suppress signature generation. format.signatureFile Works just like format.signature except the contents of the file specified by this variable will be used as the signature. format.suffix The default for format-patch is to output files with the suffix `.patch`. Use this variable to change that suffix (make sure to include the dot if you want it). format.encodeEmailHeaders Encode email headers that have non-ASCII characters with "Q-encoding" (described in RFC 2047) for email transmission. Defaults to true. format.pretty The default pretty format for the log/show/whatchanged commands. See [git-log[1]](git-log), [git-show[1]](git-show), [git-whatchanged[1]](git-whatchanged). format.thread The default threading style for `git format-patch`. Can be a boolean value, or `shallow` or `deep`.
`shallow` threading makes every mail a reply to the head of the series, where the head is chosen from the cover letter, the `--in-reply-to`, and the first patch mail, in this order. `deep` threading makes every mail a reply to the previous one. A true boolean value is the same as `shallow`, and a false value disables threading. format.signOff A boolean value which lets you enable the `-s/--signoff` option of format-patch by default. **Note:** Adding the `Signed-off-by` trailer to a patch should be a conscious act and means that you certify you have the rights to submit this work under the same open source license. Please see the `SubmittingPatches` document for further discussion. format.coverLetter A boolean that controls whether to generate a cover-letter when format-patch is invoked, but in addition can be set to "auto", to generate a cover-letter only when there’s more than one patch. Default is false. format.outputDirectory Set a custom directory to store the resulting files instead of the current working directory. All directory components will be created. format.filenameMaxLength The maximum length of the output filenames generated by the `format-patch` command; defaults to 64. Can be overridden by the `--filename-max-length=<n>` command line option. format.useAutoBase A boolean value which lets you enable the `--base=auto` option of format-patch by default. Can also be set to "whenAble" to allow enabling `--base=auto` if a suitable base is available, but to skip adding base info otherwise without the format dying. format.notes Provides the default value for the `--notes` option to format-patch. Accepts a boolean value, or a ref which specifies where to get notes. If false, format-patch defaults to `--no-notes`. If true, format-patch defaults to `--notes`. If set to a non-boolean value, format-patch defaults to `--notes=<ref>`, where `ref` is the non-boolean value. Defaults to false. 
If one wishes to use the ref `refs/notes/true`, please use that literal instead. This configuration can be specified multiple times in order to allow multiple notes refs to be included. In that case, it will behave similarly to multiple `--[no-]notes[=]` options passed in. That is, a value of `true` will show the default notes, a value of `<ref>` will also show notes from that notes ref and a value of `false` will negate previous configurations and not show notes. For example,

```
[format]
	notes = true
	notes = foo
	notes = false
	notes = bar
```

will only show notes from `refs/notes/bar`. filter.<driver>.clean The command which is used to convert the content of a worktree file to a blob upon checkin. See [gitattributes[5]](gitattributes) for details. filter.<driver>.smudge The command which is used to convert the content of a blob object to a worktree file upon checkout. See [gitattributes[5]](gitattributes) for details. fsck.<msg-id> During fsck git may find issues with legacy data which wouldn’t be generated by current versions of git, and which wouldn’t be sent over the wire if `transfer.fsckObjects` was set. This feature is intended to support working with legacy repositories containing such data. Setting `fsck.<msg-id>` will be picked up by [git-fsck[1]](git-fsck), but to accept pushes of such data set `receive.fsck.<msg-id>` instead, or to clone or fetch it set `fetch.fsck.<msg-id>`. The rest of the documentation discusses `fsck.*` for brevity, but the same applies for the corresponding `receive.fsck.*` and `fetch.fsck.*` variables. Unlike variables like `color.ui` and `core.editor`, the `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` variables will not fall back on the `fsck.<msg-id>` configuration if they aren’t set. To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values.
When `fsck.<msg-id>` is set, errors can be switched to warnings and vice versa by configuring the `fsck.<msg-id>` setting where the `<msg-id>` is the fsck message ID and the value is one of `error`, `warn` or `ignore`. For convenience, fsck prefixes the error/warning with the message ID, e.g. "missingEmail: invalid author/committer line - missing email" means that setting `fsck.missingEmail = ignore` will hide that issue. In general, it is better to enumerate existing objects with problems with `fsck.skipList`, instead of listing the kind of breakages these problematic objects share to be ignored, as doing the latter will allow new instances of the same breakages to go unnoticed. Setting an unknown `fsck.<msg-id>` value will cause fsck to die, but doing the same for `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` will only cause git to warn. See the `Fsck Messages` section of [git-fsck[1]](git-fsck) for supported values of `<msg-id>`. fsck.skipList The path to a list of object names (i.e. one unabbreviated SHA-1 per line) that are known to be broken in a non-fatal way and should be ignored. On versions of Git 2.20 and later, comments (`#`), empty lines, and any leading and trailing whitespace are ignored. Everything but a SHA-1 per line will error out on older versions. This feature is useful when an established project should be accepted despite early commits containing errors that can be safely ignored, such as invalid committer email addresses. Note: corrupt objects cannot be skipped with this setting. Like `fsck.<msg-id>` this variable has corresponding `receive.fsck.skipList` and `fetch.fsck.skipList` variants. Unlike variables like `color.ui` and `core.editor`, the `receive.fsck.skipList` and `fetch.fsck.skipList` variables will not fall back on the `fsck.skipList` configuration if they aren’t set. To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values.
Older versions of Git (before 2.20) documented that the object names list should be sorted. This was never a requirement; the object names could appear in any order, but when reading the list we tracked whether the list was sorted for the purposes of an internal binary search implementation, which could save itself some work with an already sorted list. Unless you had a humongous list there was no reason to go out of your way to pre-sort the list. After Git version 2.20 a hash implementation is used instead, so there’s now no reason to pre-sort the list. fsmonitor.allowRemote By default, the fsmonitor daemon refuses to work against network-mounted repositories. Setting `fsmonitor.allowRemote` to `true` overrides this behavior. Only respected when `core.fsmonitor` is set to `true`. fsmonitor.socketDir This Mac OS-specific option, if set, specifies the directory in which to create the Unix domain socket used for communication between the fsmonitor daemon and various Git commands. The directory must reside on a native Mac OS filesystem. Only respected when `core.fsmonitor` is set to `true`. gc.aggressiveDepth The depth parameter used in the delta compression algorithm used by `git gc --aggressive`. This defaults to 50, which is the default for the `--depth` option when `--aggressive` isn’t in use. See the documentation for the `--depth` option in [git-repack[1]](git-repack) for more details. gc.aggressiveWindow The window size parameter used in the delta compression algorithm used by `git gc --aggressive`. This defaults to 250, which is a much more aggressive window size than the default `--window` of 10. See the documentation for the `--window` option in [git-repack[1]](git-repack) for more details. gc.auto When there are approximately more than this many loose objects in the repository, `git gc --auto` will pack them. Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700.
Setting this to 0 disables not only automatic packing based on the number of loose objects, but any other heuristic `git gc --auto` will otherwise use to determine if there’s work to do, such as `gc.autoPackLimit`. gc.autoPackLimit When there are more than this many packs that are not marked with a `*.keep` file in the repository, `git gc --auto` consolidates them into one larger pack. The default value is 50. Setting this to 0 disables it. Setting `gc.auto` to 0 will also disable this. See the `gc.bigPackThreshold` configuration variable below. When in use, it’ll affect how the auto pack limit works. gc.autoDetach Make `git gc --auto` return immediately and run in the background if the system supports it. Default is true. gc.bigPackThreshold If non-zero, all packs larger than this limit are kept when `git gc` is run. This is very similar to `--keep-largest-pack` except that all packs that meet the threshold are kept, not just the largest pack. Defaults to zero. Common unit suffixes of `k`, `m`, or `g` are supported. Note that if the number of kept packs is more than gc.autoPackLimit, this configuration variable is ignored, and all packs except the base pack will be repacked. After this the number of packs should go below gc.autoPackLimit and gc.bigPackThreshold should be respected again. If the amount of memory estimated for `git repack` to run smoothly is not available and `gc.bigPackThreshold` is not set, the largest pack will also be excluded (this is the equivalent of running `git gc` with `--keep-largest-pack`). gc.writeCommitGraph If true, then gc will rewrite the commit-graph file when [git-gc[1]](git-gc) is run. When using `git gc --auto` the commit-graph will be updated if housekeeping is required. Default is true. See [git-commit-graph[1]](git-commit-graph) for details. gc.logExpiry If the file gc.log exists, then `git gc --auto` will print its content and exit with status zero instead of running, unless that file is more than `gc.logExpiry` old.
Default is "1.day". See `gc.pruneExpire` for more ways to specify its value. gc.packRefs Running `git pack-refs` in a repository renders it unclonable by Git versions prior to 1.5.1.2 over dumb transports such as HTTP. This variable determines whether `git gc` runs `git pack-refs`. This can be set to `notbare` to enable it within all non-bare repos or it can be set to a boolean value. The default is `true`. gc.cruftPacks Store unreachable objects in a cruft pack (see [git-repack[1]](git-repack)) instead of as loose objects. The default is `false`. gc.pruneExpire When `git gc` is run, it will call `prune --expire 2.weeks.ago` (and `repack --cruft --cruft-expiration 2.weeks.ago` if using cruft packs via `gc.cruftPacks` or `--cruft`). Override the grace period with this config variable. The value "now" may be used to disable this grace period and always prune unreachable objects immediately, or "never" may be used to suppress pruning. This feature helps prevent corruption when `git gc` runs concurrently with another process writing to the repository; see the "NOTES" section of [git-gc[1]](git-gc). gc.worktreePruneExpire When `git gc` is run, it calls `git worktree prune --expire 3.months.ago`. This config variable can be used to set a different grace period. The value "now" may be used to disable the grace period and prune `$GIT_DIR/worktrees` immediately, or "never" may be used to suppress pruning. gc.reflogExpire gc.<pattern>.reflogExpire `git reflog expire` removes reflog entries older than this time; defaults to 90 days. The value "now" expires all entries immediately, and "never" suppresses expiration altogether. With "<pattern>" (e.g. "refs/stash") in the middle the setting applies only to the refs that match the <pattern>. gc.reflogExpireUnreachable gc.<pattern>.reflogExpireUnreachable `git reflog expire` removes reflog entries older than this time and are not reachable from the current tip; defaults to 30 days. 
The value "now" expires all entries immediately, and "never" suppresses expiration altogether. With "<pattern>" (e.g. "refs/stash") in the middle, the setting applies only to the refs that match the <pattern>. These types of entries are generally created as a result of using `git commit --amend` or `git rebase` and are the commits prior to the amend or rebase occurring. Since these changes are not part of the current project most users will want to expire them sooner, which is why the default is more aggressive than `gc.reflogExpire`. gc.rerereResolved Records of conflicted merges you resolved earlier are kept for this many days when `git rerere gc` is run. You can also use more human-readable "1.month.ago", etc. The default is 60 days. See [git-rerere[1]](git-rerere). gc.rerereUnresolved Records of conflicted merges you have not resolved are kept for this many days when `git rerere gc` is run. You can also use more human-readable "1.month.ago", etc. The default is 15 days. See [git-rerere[1]](git-rerere). gitcvs.commitMsgAnnotation Append this string to each commit message. Set to an empty string to disable this feature. Defaults to "via git-CVS emulator". gitcvs.enabled Whether the CVS server interface is enabled for this repository. See [git-cvsserver[1]](git-cvsserver). gitcvs.logFile Path to a log file where the CVS server interface logs various messages. See [git-cvsserver[1]](git-cvsserver). gitcvs.usecrlfattr If true, the server will look up the end-of-line conversion attributes for files to determine the `-k` modes to use. If the attributes force Git to treat a file as text, the `-k` mode will be left blank so CVS clients will treat it as text. If they suppress text conversion, the file will be set with `-kb` mode, which suppresses any newline munging the client might otherwise do. If the attributes do not allow the file type to be determined, then `gitcvs.allBinary` is used. See [gitattributes[5]](gitattributes).
gitcvs.allBinary This is used if `gitcvs.usecrlfattr` does not resolve the correct `-kb` mode to use. If true, all unresolved files are sent to the client in mode `-kb`. This causes the client to treat them as binary files, which suppresses any newline munging it otherwise might do. Alternatively, if it is set to "guess", then the contents of the file are examined to decide if it is binary, similar to `core.autocrlf`. gitcvs.dbName Database used by git-cvsserver to cache revision information derived from the Git repository. The exact meaning depends on the database driver used; for SQLite (which is the default driver) this is a filename. Supports variable substitution (see [git-cvsserver[1]](git-cvsserver) for details). May not contain semicolons (`;`). Default: `%Ggitcvs.%m.sqlite` gitcvs.dbDriver The Perl DBI driver to use. You can specify any available driver for this here, but it might not work. git-cvsserver is tested with `DBD::SQLite`, reported to work with `DBD::Pg`, and reported **not** to work with `DBD::mysql`. Experimental feature. May not contain double colons (`:`). Default: `SQLite`. See [git-cvsserver[1]](git-cvsserver). gitcvs.dbUser, gitcvs.dbPass Database user and password. Only useful if setting `gitcvs.dbDriver`, since SQLite has no concept of database users and/or passwords. `gitcvs.dbUser` supports variable substitution (see [git-cvsserver[1]](git-cvsserver) for details). gitcvs.dbTableNamePrefix Database table name prefix. Prepended to the names of any database tables used, allowing a single database to be used for several repositories. Supports variable substitution (see [git-cvsserver[1]](git-cvsserver) for details). Any non-alphabetic characters will be replaced with underscores. All gitcvs variables except for `gitcvs.usecrlfattr` and `gitcvs.allBinary` can also be specified as `gitcvs.<access_method>.<varname>` (where `access_method` is one of "ext" and "pserver") to make them apply only for the given access method.
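The per-access-method form described above uses config subsections. The following sketch (the log path is illustrative) enables the CVS interface only for "ext" access:

```
[gitcvs]
	enabled = false
	logFile = /var/log/gitcvs.log
[gitcvs "ext"]
	enabled = true
```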
gitweb.category gitweb.description gitweb.owner gitweb.url See [gitweb[1]](gitweb) for description. gitweb.avatar gitweb.blame gitweb.grep gitweb.highlight gitweb.patches gitweb.pickaxe gitweb.remote_heads gitweb.showSizes gitweb.snapshot See [gitweb.conf[5]](gitweb.conf) for description. grep.lineNumber If set to true, enable the `-n` option by default. grep.column If set to true, enable the `--column` option by default. grep.patternType Set the default matching behavior. Using a value of `basic`, `extended`, `fixed`, or `perl` will enable the `--basic-regexp`, `--extended-regexp`, `--fixed-strings`, or `--perl-regexp` option accordingly, while the value `default` will use the `grep.extendedRegexp` option to choose between `basic` and `extended`. grep.extendedRegexp If set to true, enable the `--extended-regexp` option by default. This option is ignored when the `grep.patternType` option is set to a value other than `default`. grep.threads Number of grep worker threads to use. If unset (or set to 0), Git will use as many threads as the number of logical cores available. grep.fullName If set to true, enable the `--full-name` option by default. grep.fallbackToNoIndex If set to true, fall back to `git grep --no-index` if `git grep` is executed outside of a git repository. Defaults to false. gpg.program Use this custom program instead of "`gpg`" found on `$PATH` when making or verifying a PGP signature. The program must support the same command-line interface as GPG: to verify a detached signature, "`gpg --verify $signature - <$file`" is run, and the program is expected to signal a good signature by exiting with code 0; to generate an ASCII-armored detached signature, the standard input of "`gpg -bsau $key`" is fed with the contents to be signed, and the program is expected to send the result to its standard output. gpg.format Specifies which key format to use when signing with `--gpg-sign`. Default is "openpgp". Other possible values are "x509" and "ssh".
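Switching to SSH signing per `gpg.format` above might look like the following sketch; `user.signingKey` (documented elsewhere in git-config) selects the key, and the key path shown is a hypothetical example:

```
[gpg]
	format = ssh
[user]
	signingKey = ~/.ssh/id_ed25519.pub
```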
gpg.<format>.program Use this to customize the program used for the signing format you chose (see `gpg.program` and `gpg.format`). `gpg.program` can still be used as a legacy synonym for `gpg.openpgp.program`. The default value for `gpg.x509.program` is "gpgsm" and for `gpg.ssh.program` it is "ssh-keygen". gpg.minTrustLevel Specifies a minimum trust level for signature verification. If this option is unset, then signature verification for merge operations requires a key with at least `marginal` trust. Other operations that perform signature verification require a key with at least `undefined` trust. Setting this option overrides the required trust-level for all operations. Supported values, in increasing order of significance:

* `undefined`
* `never`
* `marginal`
* `fully`
* `ultimate`

gpg.ssh.defaultKeyCommand This command will be run when `user.signingKey` is not set and an ssh signature is requested. On successful exit, a valid ssh public key prefixed with `key::` is expected in the first line of its output. This allows for a script doing a dynamic lookup of the correct public key when it is impractical to statically configure `user.signingKey`, for example when keys or SSH certificates are rotated frequently or when selection of the right key depends on external factors unknown to git. gpg.ssh.allowedSignersFile A file containing ssh public keys which you are willing to trust. The file consists of one or more lines of principals followed by an ssh public key. e.g.: `[email protected],[email protected] ssh-rsa AAAAX1...` See ssh-keygen(1) "ALLOWED SIGNERS" for details. The principal is only used to identify the key and is available when verifying a signature. SSH has no concept of trust levels like gpg does. To be able to differentiate between valid signatures and trusted signatures, the trust level of a signature verification is set to `fully` when the public key is present in the allowedSignersFile.
Otherwise the trust level is `undefined` and git verify-commit/tag will fail. This file can be set to a location outside of the repository and every developer maintains their own trust store. A central repository server could generate this file automatically from ssh keys with push access to verify the code against. In a corporate setting this file is probably generated at a global location from automation that already handles developer ssh keys. A repository that only allows signed commits can store the file in the repository itself using a path relative to the top-level of the working tree. This way only committers with an already valid key can add or change keys in the keyring. Since OpenSSH 8.8 this file allows specifying a key lifetime using the valid-after & valid-before options. Git will mark signatures as valid if the signing key was valid at the time of the signature’s creation. This allows users to change a signing key without invalidating all previously made signatures. Using an SSH CA key with the cert-authority option (see ssh-keygen(1) "CERTIFICATES") is also valid. gpg.ssh.revocationFile Either an SSH KRL or a list of revoked public keys (without the principal prefix). See ssh-keygen(1) for details. If a public key is found in this file then it will always be treated as having trust level "never" and signatures will show as invalid. gui.commitMsgWidth Defines how wide the commit message window is in [git-gui[1]](git-gui). "75" is the default. gui.diffContext Specifies how many context lines should be used in calls to diff made by [git-gui[1]](git-gui). The default is "5". gui.displayUntracked Determines if [git-gui[1]](git-gui) shows untracked files in the file list. The default is "true". gui.encoding Specifies the default character encoding to use for displaying file contents in [git-gui[1]](git-gui) and [gitk[1]](gitk). It can be overridden by setting the `encoding` attribute for relevant files (see [gitattributes[5]](gitattributes)).
If this option is not set, the tools default to the locale encoding. gui.matchTrackingBranch Determines if new branches created with [git-gui[1]](git-gui) should default to tracking remote branches with matching names or not. Default: "false". gui.newBranchTemplate Is used as the suggested name when creating new branches using [git-gui[1]](git-gui). gui.pruneDuringFetch "true" if [git-gui[1]](git-gui) should prune remote-tracking branches when performing a fetch. The default value is "false". gui.trustmtime Determines if [git-gui[1]](git-gui) should trust the file modification timestamp or not. By default the timestamps are not trusted. gui.spellingDictionary Specifies the dictionary used for spell checking commit messages in [git-gui[1]](git-gui). When set to "none" spell checking is turned off. gui.fastCopyBlame If true, `git gui blame` uses `-C` instead of `-C -C` for original location detection. It makes blame significantly faster on huge repositories at the expense of less thorough copy detection. gui.copyBlameThreshold Specifies the threshold to use in `git gui blame` original location detection, measured in alphanumeric characters. See the [git-blame[1]](git-blame) manual for more information on copy detection. gui.blamehistoryctx Specifies the radius of history context in days to show in [gitk[1]](gitk) for the selected commit, when the `Show History Context` menu item is invoked from `git gui blame`. If this variable is set to zero, the whole history is shown. guitool.<name>.cmd Specifies the shell command line to execute when the corresponding item of the [git-gui[1]](git-gui) `Tools` menu is invoked. This option is mandatory for every tool. The command is executed from the root of the working directory, and in the environment it receives the name of the tool as `GIT_GUITOOL`, the name of the currently selected file as `FILENAME`, and the name of the current branch as `CUR_BRANCH` (if the head is detached, `CUR_BRANCH` is empty).
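A minimal `Tools` menu entry using `guitool.<name>.cmd` could be sketched as follows (the tool name and command are invented for illustration); the environment variables described above are available to the command:

```
[guitool "Show Context"]
	cmd = echo "tool=$GIT_GUITOOL file=$FILENAME branch=$CUR_BRANCH"
```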
guitool.<name>.needsFile Run the tool only if a diff is selected in the GUI. It guarantees that `FILENAME` is not empty. guitool.<name>.noConsole Run the command silently, without creating a window to display its output. guitool.<name>.noRescan Don’t rescan the working directory for changes after the tool finishes execution. guitool.<name>.confirm Show a confirmation dialog before actually running the tool. guitool.<name>.argPrompt Request a string argument from the user, and pass it to the tool through the `ARGS` environment variable. Since requesting an argument implies confirmation, the `confirm` option has no effect if this is enabled. If the option is set to `true`, `yes`, or `1`, the dialog uses a built-in generic prompt; otherwise the exact value of the variable is used. guitool.<name>.revPrompt Request a single valid revision from the user, and set the `REVISION` environment variable. In other aspects this option is similar to `argPrompt`, and can be used together with it. guitool.<name>.revUnmerged Show only unmerged branches in the `revPrompt` subdialog. This is useful for tools similar to merge or rebase, but not for things like checkout or reset. guitool.<name>.title Specifies the title to use for the prompt dialog. The default is the tool name. guitool.<name>.prompt Specifies the general prompt string to display at the top of the dialog, before subsections for `argPrompt` and `revPrompt`. The default value includes the actual command. help.browser Specify the browser that will be used to display help in the `web` format. See [git-help[1]](git-help). help.format Override the default help format used by [git-help[1]](git-help). Values `man`, `info`, `web` and `html` are supported. `man` is the default. `web` and `html` are the same. help.autoCorrect If git detects typos and can identify exactly one valid command similar to the error, git will try to suggest the correct command or even run the suggestion automatically. 
Possible config values are: * 0 (default): show the suggested command. * positive number: run the suggested command after specified deciseconds (0.1 sec). * "immediate": run the suggested command immediately. * "prompt": show the suggestion and prompt for confirmation to run the command. * "never": don’t run or show any suggested command. help.htmlPath Specify the path where the HTML documentation resides. File system paths and URLs are supported. HTML pages will be prefixed with this path when help is displayed in the `web` format. This defaults to the documentation path of your Git installation. http.proxy Override the HTTP proxy, normally configured using the `http_proxy`, `https_proxy`, and `all_proxy` environment variables (see `curl(1)`). In addition to the syntax understood by curl, it is possible to specify a proxy string with a user name but no password, in which case git will attempt to acquire one in the same way it does for other credentials. See [gitcredentials[7]](gitcredentials) for more information. The syntax thus is `[protocol://][user[:password]@]proxyhost[:port]`. This can be overridden on a per-remote basis; see remote.<name>.proxy http.proxyAuthMethod Set the method with which to authenticate against the HTTP proxy. This only takes effect if the configured proxy string contains a user name part (i.e. is of the form `user@host` or `user@host:port`). This can be overridden on a per-remote basis; see `remote.<name>.proxyAuthMethod`. Both can be overridden by the `GIT_HTTP_PROXY_AUTHMETHOD` environment variable. Possible values are: * `anyauth` - Automatically pick a suitable authentication method. It is assumed that the proxy answers an unauthenticated request with a 407 status code and one or more Proxy-authenticate headers with supported authentication methods. This is the default. 
* `basic` - HTTP Basic authentication * `digest` - HTTP Digest authentication; this prevents the password from being transmitted to the proxy in clear text * `negotiate` - GSS-Negotiate authentication (compare the --negotiate option of `curl(1)`) * `ntlm` - NTLM authentication (compare the --ntlm option of `curl(1)`) http.proxySSLCert The pathname of a file that stores a client certificate to use to authenticate with an HTTPS proxy. Can be overridden by the `GIT_PROXY_SSL_CERT` environment variable. http.proxySSLKey The pathname of a file that stores a private key to use to authenticate with an HTTPS proxy. Can be overridden by the `GIT_PROXY_SSL_KEY` environment variable. http.proxySSLCertPasswordProtected Enable Git’s password prompt for the proxy SSL certificate. Otherwise OpenSSL will prompt the user, possibly many times, if the certificate or private key is encrypted. Can be overridden by the `GIT_PROXY_SSL_CERT_PASSWORD_PROTECTED` environment variable. http.proxySSLCAInfo Pathname to the file containing the certificate bundle that should be used to verify the proxy with when using an HTTPS proxy. Can be overridden by the `GIT_PROXY_SSL_CAINFO` environment variable. http.emptyAuth Attempt authentication without seeking a username or password. This can be used to attempt GSS-Negotiate authentication without specifying a username in the URL, as libcurl normally requires a username for authentication. http.delegation Control GSSAPI credential delegation. The delegation is disabled by default in libcurl since version 7.21.7. Set parameter to tell the server what it is allowed to delegate when it comes to user credentials. Used with GSS/kerberos. Possible values are: * `none` - Don’t allow any delegation. * `policy` - Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. * `always` - Unconditionally allow the server to delegate. 
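As a sketch, the proxy settings described above might be combined in a configuration file like this (the proxy host, port, and user name are hypothetical):

```ini
[http]
	# Route HTTP(S) traffic through an authenticating proxy; because the
	# string carries a user name but no password, Git will acquire the
	# password through its usual credential machinery.
	proxy = http://proxyuser@proxy.example.com:3128
	# Force Basic authentication instead of letting curl pick (anyauth).
	proxyAuthMethod = basic
```

The same pair can be overridden per remote with `remote.<name>.proxy` and `remote.<name>.proxyAuthMethod`.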
http.extraHeader Pass an additional HTTP header when communicating with a server. If more than one such entry exists, all of them are added as extra headers. To allow overriding the settings inherited from the system config, an empty value will reset the extra headers to the empty list. http.cookieFile The pathname of a file containing previously stored cookie lines, which should be used in the Git http session, if they match the server. The file format of the file to read cookies from should be plain HTTP headers or the Netscape/Mozilla cookie file format (see `curl(1)`). Note that the file specified with http.cookieFile is used only as input unless http.saveCookies is set. http.saveCookies If set, store cookies received during requests to the file specified by http.cookieFile. Has no effect if http.cookieFile is unset. http.version Use the specified HTTP protocol version when communicating with a server, if you want to force a version other than the default. The available and default version depend on libcurl. Currently the possible values of this option are: * HTTP/2 * HTTP/1.1 http.curloptResolve Hostname resolution information that will be used first by libcurl when sending HTTP requests. This information should be in one of the following formats: * [+]HOST:PORT:ADDRESS[,ADDRESS] * -HOST:PORT The first format redirects all requests to the given `HOST:PORT` to the provided `ADDRESS`(es). The second format clears all previous config values for that `HOST:PORT` combination. To allow easy overriding of all the settings inherited from the system config, an empty value will reset all resolution information to the empty list. http.sslVersion The SSL version to use when negotiating an SSL connection, if you want to force a version other than the default. The available and default version depend on whether libcurl was built against NSS or OpenSSL and the particular configuration of the crypto library in use.
Internally this sets the `CURLOPT_SSL_VERSION` option; see the libcurl documentation for more details on the format of this option and for the ssl version supported. Currently the possible values of this option are: * sslv2 * sslv3 * tlsv1 * tlsv1.0 * tlsv1.1 * tlsv1.2 * tlsv1.3 Can be overridden by the `GIT_SSL_VERSION` environment variable. To force git to use libcurl’s default ssl version and ignore any explicit http.sslversion option, set `GIT_SSL_VERSION` to the empty string. http.sslCipherList A list of SSL ciphers to use when negotiating an SSL connection. The available ciphers depend on whether libcurl was built against NSS or OpenSSL and the particular configuration of the crypto library in use. Internally this sets the `CURLOPT_SSL_CIPHER_LIST` option; see the libcurl documentation for more details on the format of this list. Can be overridden by the `GIT_SSL_CIPHER_LIST` environment variable. To force git to use libcurl’s default cipher list and ignore any explicit http.sslCipherList option, set `GIT_SSL_CIPHER_LIST` to the empty string. http.sslVerify Whether to verify the SSL certificate when fetching or pushing over HTTPS. Defaults to true. Can be overridden by the `GIT_SSL_NO_VERIFY` environment variable. http.sslCert File containing the SSL certificate when fetching or pushing over HTTPS. Can be overridden by the `GIT_SSL_CERT` environment variable. http.sslKey File containing the SSL private key when fetching or pushing over HTTPS. Can be overridden by the `GIT_SSL_KEY` environment variable. http.sslCertPasswordProtected Enable Git’s password prompt for the SSL certificate. Otherwise OpenSSL will prompt the user, possibly many times, if the certificate or private key is encrypted. Can be overridden by the `GIT_SSL_CERT_PASSWORD_PROTECTED` environment variable. http.sslCAInfo File containing the certificates to verify the peer with when fetching or pushing over HTTPS. Can be overridden by the `GIT_SSL_CAINFO` environment variable. 
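For example, the client-certificate options above might be combined as follows; all file paths here are hypothetical:

```ini
[http]
	# Client certificate and key presented when fetching or pushing over HTTPS.
	sslCert = ~/certs/client.pem
	sslKey = ~/certs/client.key
	# Let Git prompt for the key passphrase once, instead of OpenSSL
	# prompting repeatedly.
	sslCertPasswordProtected = true
	# Verify the server against a custom CA bundle.
	sslCAInfo = ~/certs/ca-bundle.pem
```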
http.sslCAPath Path containing files with the CA certificates to verify the peer with when fetching or pushing over HTTPS. Can be overridden by the `GIT_SSL_CAPATH` environment variable. http.sslBackend Name of the SSL backend to use (e.g. "openssl" or "schannel"). This option is ignored if cURL lacks support for choosing the SSL backend at runtime. http.schannelCheckRevoke Used to enforce or disable certificate revocation checks in cURL when http.sslBackend is set to "schannel". Defaults to `true` if unset. Only necessary to disable this if Git consistently errors and the message is about checking the revocation status of a certificate. This option is ignored if cURL lacks support for setting the relevant SSL option at runtime. http.schannelUseSSLCAInfo As of cURL v7.60.0, the Secure Channel backend can use the certificate bundle provided via `http.sslCAInfo`, but that would override the Windows Certificate Store. Since this is not desirable by default, Git will tell cURL not to use that bundle by default when the `schannel` backend was configured via `http.sslBackend`, unless `http.schannelUseSSLCAInfo` overrides this behavior. http.pinnedPubkey Public key of the https service. It may either be the filename of a PEM or DER encoded public key file or a string starting with `sha256//` followed by the base64 encoded sha256 hash of the public key. See also libcurl `CURLOPT_PINNEDPUBLICKEY`. git will exit with an error if this option is set but not supported by cURL. http.sslTry Attempt to use AUTH SSL/TLS and encrypted data transfers when connecting via regular FTP protocol. This might be needed if the FTP server requires it for security reasons or you wish to connect securely whenever remote FTP server supports it. Default is false since it might trigger certificate verification errors on misconfigured servers. http.maxRequests How many HTTP requests to launch in parallel. Can be overridden by the `GIT_HTTP_MAX_REQUESTS` environment variable. Default is 5. 
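As an illustration of `http.pinnedPubkey` above, a pinned key might be configured like this (the `sha256//` value shown is a placeholder, not a real hash):

```ini
[http]
	# Abort the connection unless the server presents exactly this public key.
	pinnedPubkey = sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
```

Remember that git exits with an error if this option is set but the installed cURL cannot honor it.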
http.minSessions The number of curl sessions (counted across slots) to be kept across requests. They will not be ended with `curl_easy_cleanup()` until `http_cleanup()` is invoked. If `USE_CURL_MULTI` is not defined, this value will be capped at 1. Defaults to 1. http.postBuffer Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests. Note that raising this limit is only effective for disabling chunked transfer encoding and therefore should be used only where the remote server or a proxy only supports HTTP/1.0 or is noncompliant with the HTTP standard. Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes. http.lowSpeedLimit, http.lowSpeedTime If the HTTP transfer speed is less than `http.lowSpeedLimit` for longer than `http.lowSpeedTime` seconds, the transfer is aborted. Can be overridden by the `GIT_HTTP_LOW_SPEED_LIMIT` and `GIT_HTTP_LOW_SPEED_TIME` environment variables. http.noEPSV A boolean which disables the use of the EPSV ftp command by curl. This can be helpful with some "poor" ftp servers which don’t support EPSV mode. Can be overridden by the `GIT_CURL_FTP_NO_EPSV` environment variable. Default is false (curl will use EPSV). http.userAgent The HTTP `USER_AGENT` string presented to an HTTP server. The default value represents the version of the client Git such as git/1.7.1. This option allows you to override this value to a more common value such as Mozilla/4.0. This may be necessary, for instance, if connecting through a firewall that restricts HTTP connections to a set of common `USER_AGENT` strings (but not including those like git/1.7.1).
Can be overridden by the `GIT_HTTP_USER_AGENT` environment variable. http.followRedirects Whether git should follow HTTP redirects. If set to `true`, git will transparently follow any redirect issued by a server it encounters. If set to `false`, git will treat all redirects as errors. If set to `initial`, git will follow redirects only for the initial request to a remote, but not for subsequent follow-up HTTP requests. Since git uses the redirected URL as the base for the follow-up requests, this is generally sufficient. The default is `initial`. http.<url>.* Any of the http.* options above can be applied selectively to some URLs. For a config key to match a URL, each element of the config key is compared to that of the URL, in the following order: 1. Scheme (e.g., `https` in `https://example.com/`). This field must match exactly between the config key and the URL. 2. Host/domain name (e.g., `example.com` in `https://example.com/`). This field must match between the config key and the URL. It is possible to specify a `*` as part of the host name to match all subdomains at this level. `https://*.example.com/` for example would match `https://foo.example.com/`, but not `https://foo.bar.example.com/`. 3. Port number (e.g., `8080` in `http://example.com:8080/`). This field must match exactly between the config key and the URL. Omitted port numbers are automatically converted to the correct default for the scheme before matching. 4. Path (e.g., `repo.git` in `https://example.com/repo.git`). The path field of the config key must match the path field of the URL either exactly or as a prefix of slash-delimited path elements. This means a config key with path `foo/` matches URL path `foo/bar`. A prefix can only match on a slash (`/`) boundary. Longer matches take precedence (so a config key with path `foo/bar` is a better match to URL path `foo/bar` than a config key with just path `foo/`). 5. User name (e.g., `user` in `https://user@example.com/repo.git`).
If the config key has a user name it must match the user name in the URL exactly. If the config key does not have a user name, that config key will match a URL with any user name (including none), but at a lower precedence than a config key with a user name. The list above is ordered by decreasing precedence; a URL that matches a config key’s path is preferred to one that matches its user name. For example, if the URL is `https://user@example.com/foo/bar` a config key match of `https://example.com/foo` will be preferred over a config key match of `https://user@example.com`. All URLs are normalized before attempting any matching (the password part, if embedded in the URL, is always ignored for matching purposes) so that equivalent URLs that are simply spelled differently will match properly. Environment variable settings always override any matches. The URLs that are matched against are those given directly to Git commands. This means any URLs visited as a result of a redirection do not participate in matching. i18n.commitEncoding Character encoding the commit messages are stored in; Git itself does not care per se, but this information is necessary e.g. when importing commits from emails or in the gitk graphical history browser (and possibly at other places in the future or in other porcelains). See e.g. [git-mailinfo[1]](git-mailinfo). Defaults to `utf-8`. i18n.logOutputEncoding Character encoding the commit messages are converted to when running `git log` and friends. imap.folder The folder to drop the mails into, which is typically the Drafts folder. For example: "INBOX.Drafts", "INBOX/Drafts" or "[Gmail]/Drafts". Required. imap.tunnel Command used to set up a tunnel to the IMAP server through which commands will be piped instead of using a direct network connection to the server. Required when imap.host is not set. imap.host A URL identifying the server. Use an `imap://` prefix for non-secure connections and an `imaps://` prefix for secure connections.
Ignored when imap.tunnel is set, but required otherwise. imap.user The username to use when logging in to the server. imap.pass The password to use when logging in to the server. imap.port An integer port number to connect to on the server. Defaults to 143 for imap:// hosts and 993 for imaps:// hosts. Ignored when imap.tunnel is set. imap.sslverify A boolean to enable/disable verification of the server certificate used by the SSL/TLS connection. Default is `true`. Ignored when imap.tunnel is set. imap.preformattedHTML A boolean to enable/disable the use of HTML encoding when sending a patch. An HTML encoded patch will be bracketed with <pre> and have a content type of text/html. Ironically, enabling this option causes Thunderbird to send the patch as a text/plain, format=fixed email. Default is `false`. imap.authMethod Specify the authentication method to use with the IMAP server. If Git was built with the `NO_CURL` option, or if your curl version is older than 7.34.0, or if you’re running git-imap-send with the `--no-curl` option, the only supported method is `CRAM-MD5`. If this is not set then `git imap-send` uses the basic IMAP plaintext LOGIN command. include.path includeIf.<condition>.path Special variables to include other configuration files. See the "CONFIGURATION FILE" section in the main [git-config[1]](git-config) documentation, specifically the "Includes" and "Conditional Includes" subsections. index.recordEndOfIndexEntries Specifies whether the index file should include an "End Of Index Entry" section. This reduces index load time on multiprocessor machines but produces a message "ignoring EOIE extension" when reading the index using Git versions before 2.20. Defaults to `true` if index.threads has been explicitly enabled, `false` otherwise. index.recordOffsetTable Specifies whether the index file should include an "Index Entry Offset Table" section.
This reduces index load time on multiprocessor machines but produces a message "ignoring IEOT extension" when reading the index using Git versions before 2.20. Defaults to `true` if index.threads has been explicitly enabled, `false` otherwise. index.sparse When enabled, write the index using sparse-directory entries. This has no effect unless `core.sparseCheckout` and `core.sparseCheckoutCone` are both enabled. Defaults to `false`. index.threads Specifies the number of threads to spawn when loading the index. This is meant to reduce index load time on multiprocessor machines. Specifying 0 or `true` will cause Git to auto-detect the number of CPUs and set the number of threads accordingly. Specifying 1 or `false` will disable multithreading. Defaults to `true`. index.version Specify the version with which new index files should be initialized. This does not affect existing repositories. If `feature.manyFiles` is enabled, then the default is 4. init.templateDir Specify the directory from which templates will be copied. (See the "TEMPLATE DIRECTORY" section of [git-init[1]](git-init).) init.defaultBranch Allows overriding the default branch name e.g. when initializing a new repository. instaweb.browser Specify the program that will be used to browse your working repository in gitweb. See [git-instaweb[1]](git-instaweb). instaweb.httpd The HTTP daemon command-line to start gitweb on your working repository. See [git-instaweb[1]](git-instaweb). instaweb.local If true the web server started by [git-instaweb[1]](git-instaweb) will be bound to the local IP (127.0.0.1). instaweb.modulePath The default module path for [git-instaweb[1]](git-instaweb) to use instead of /usr/lib/apache2/modules. Only used if httpd is Apache. instaweb.port The port number to bind the gitweb httpd to. See [git-instaweb[1]](git-instaweb). interactive.singleKey In interactive commands, allow the user to provide one-letter input with a single key (i.e., without hitting enter).
Currently this is used by the `--patch` mode of [git-add[1]](git-add), [git-checkout[1]](git-checkout), [git-restore[1]](git-restore), [git-commit[1]](git-commit), [git-reset[1]](git-reset), and [git-stash[1]](git-stash). Note that this setting is silently ignored if portable keystroke input is not available; it requires the Perl module Term::ReadKey. interactive.diffFilter When an interactive command (such as `git add --patch`) shows a colorized diff, git will pipe the diff through the shell command defined by this configuration variable. The command may mark up the diff further for human consumption, provided that it retains a one-to-one correspondence with the lines in the original diff. Defaults to disabled (no filtering). log.abbrevCommit If true, makes [git-log[1]](git-log), [git-show[1]](git-show), and [git-whatchanged[1]](git-whatchanged) assume `--abbrev-commit`. You may override this option with `--no-abbrev-commit`. log.date Set the default date-time mode for the `log` command. Setting a value for log.date is similar to using `git log`'s `--date` option. See [git-log[1]](git-log) for details. If the format is set to "auto:foo" and the pager is in use, format "foo" will be used for the date format. Otherwise "default" will be used. log.decorate Print out the ref names of any commits that are shown by the log command. If `short` is specified, the ref name prefixes `refs/heads/`, `refs/tags/` and `refs/remotes/` will not be printed. If `full` is specified, the full ref name (including prefix) will be printed. If `auto` is specified, then if the output is going to a terminal, the ref names are shown as if `short` were given, otherwise no ref names are shown. This is the same as the `--decorate` option of `git log`. log.initialDecorationSet By default, `git log` only shows decorations for certain known ref namespaces. If `all` is specified, then show all refs as decorations. log.excludeDecoration Exclude the specified patterns from the log decorations.
This is similar to the `--decorate-refs-exclude` command-line option, but the config option can be overridden by the `--decorate-refs` option. log.diffMerges Set diff format to be used when `--diff-merges=on` is specified, see `--diff-merges` in [git-log[1]](git-log) for details. Defaults to `separate`. log.follow If `true`, `git log` will act as if the `--follow` option was used when a single <path> is given. This has the same limitations as `--follow`, i.e. it cannot be used to follow multiple files and does not work well on non-linear history. log.graphColors A list of colors, separated by commas, that can be used to draw history lines in `git log --graph`. log.showRoot If true, the initial commit will be shown as a big creation event. This is equivalent to a diff against an empty tree. Tools like [git-log[1]](git-log) or [git-whatchanged[1]](git-whatchanged), which normally hide the root commit will now show it. True by default. log.showSignature If true, makes [git-log[1]](git-log), [git-show[1]](git-show), and [git-whatchanged[1]](git-whatchanged) assume `--show-signature`. log.mailmap If true, makes [git-log[1]](git-log), [git-show[1]](git-show), and [git-whatchanged[1]](git-whatchanged) assume `--use-mailmap`, otherwise assume `--no-use-mailmap`. True by default. lsrefs.unborn May be "advertise" (the default), "allow", or "ignore". If "advertise", the server will respond to the client sending "unborn" (as described in [gitprotocol-v2[5]](gitprotocol-v2)) and will advertise support for this feature during the protocol v2 capability advertisement. "allow" is the same as "advertise" except that the server will not advertise support for this feature; this is useful for load-balanced servers that cannot be updated atomically (for example), since the administrator could configure "allow", then after a delay, configure "advertise". 
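The staged rollout described under `lsrefs.unborn` could be sketched as two config states applied in sequence (hypothetical server-side configuration):

```ini
# Phase 1, on every load-balanced server: accept "unborn" requests
# without advertising the capability.
[lsrefs]
	unborn = allow
```

and later, once all servers run the new configuration:

```ini
# Phase 2: advertise the capability as well.
[lsrefs]
	unborn = advertise
```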
mailinfo.scissors If true, makes [git-mailinfo[1]](git-mailinfo) (and therefore [git-am[1]](git-am)) act by default as if the --scissors option was provided on the command-line. When active, this feature removes everything from the message body before a scissors line (i.e. consisting mainly of ">8", "8<" and "-"). mailmap.file The location of an augmenting mailmap file. The default mailmap, located in the root of the repository, is loaded first, then the mailmap file pointed to by this variable. The location of the mailmap file may be in a repository subdirectory, or somewhere outside of the repository itself. See [git-shortlog[1]](git-shortlog) and [git-blame[1]](git-blame). mailmap.blob Like `mailmap.file`, but consider the value as a reference to a blob in the repository. If both `mailmap.file` and `mailmap.blob` are given, both are parsed, with entries from `mailmap.file` taking precedence. In a bare repository, this defaults to `HEAD:.mailmap`. In a non-bare repository, it defaults to empty. maintenance.auto This boolean config option controls whether some commands run `git maintenance run --auto` after doing their normal work. Defaults to true. maintenance.strategy This string config option provides a way to specify one of a few recommended schedules for background maintenance. This only affects which tasks are run during `git maintenance run --schedule=X` commands, provided no `--task=<task>` arguments are provided. Further, if a `maintenance.<task>.schedule` config value is set, then that value is used instead of the one provided by `maintenance.strategy`. The possible strategy strings are: * `none`: This default setting implies no tasks are run at any schedule. * `incremental`: This setting optimizes for performing small maintenance activities that do not delete any data. This does not schedule the `gc` task, but runs the `prefetch` and `commit-graph` tasks hourly, the `loose-objects` and `incremental-repack` tasks daily, and the `pack-refs` task weekly.
maintenance.<task>.enabled This boolean config option controls whether the maintenance task with name `<task>` is run when no `--task` option is specified to `git maintenance run`. These config values are ignored if a `--task` option exists. By default, only `maintenance.gc.enabled` is true. maintenance.<task>.schedule This config option controls whether or not the given `<task>` runs during a `git maintenance run --schedule=<frequency>` command. The value must be one of "hourly", "daily", or "weekly". maintenance.commit-graph.auto This integer config option controls how often the `commit-graph` task should be run as part of `git maintenance run --auto`. If zero, then the `commit-graph` task will not run with the `--auto` option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of reachable commits that are not in the commit-graph file is at least the value of `maintenance.commit-graph.auto`. The default value is 100. maintenance.loose-objects.auto This integer config option controls how often the `loose-objects` task should be run as part of `git maintenance run --auto`. If zero, then the `loose-objects` task will not run with the `--auto` option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of loose objects is at least the value of `maintenance.loose-objects.auto`. The default value is 100. maintenance.incremental-repack.auto This integer config option controls how often the `incremental-repack` task should be run as part of `git maintenance run --auto`. If zero, then the `incremental-repack` task will not run with the `--auto` option. A negative value will force the task to run every time. Otherwise, a positive value implies the command should run when the number of pack-files not in the multi-pack-index is at least the value of `maintenance.incremental-repack.auto`. The default value is 10. 
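Putting the maintenance options above together, a repository opting into background maintenance might use something like the following (the thresholds are illustrative, not recommendations):

```ini
[maintenance]
	# Run the small, non-destructive tasks on the recommended schedule.
	strategy = incremental
# Per-task settings override the strategy's defaults.
[maintenance "commit-graph"]
	schedule = hourly
	# With --auto, run once 200 reachable commits are missing from the
	# commit-graph file (the default threshold is 100).
	auto = 200
```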
man.viewer Specify the programs that may be used to display help in the `man` format. See [git-help[1]](git-help). man.<tool>.cmd Specify the command to invoke the specified man viewer. The specified command is evaluated in shell with the man page passed as argument. (See [git-help[1]](git-help).) man.<tool>.path Override the path for the given tool that may be used to display help in the `man` format. See [git-help[1]](git-help). merge.conflictStyle Specify the style in which conflicted hunks are written out to working tree files upon merge. The default is "merge", which shows a `<<<<<<<` conflict marker, changes made by one side, a `=======` marker, changes made by the other side, and then a `>>>>>>>` marker. An alternate style, "diff3", adds a `|||||||` marker and the original text before the `=======` marker. The "merge" style tends to produce smaller conflict regions than diff3, both because of the exclusion of the original text, and because when a subset of lines match on the two sides they are just pulled out of the conflict region. Another alternate style, "zdiff3", is similar to diff3 but removes matching lines on the two sides from the conflict region when those matching lines appear near either the beginning or end of a conflict region. merge.defaultToUpstream If merge is called without any commit argument, merge the upstream branches configured for the current branch by using their last observed values stored in their remote-tracking branches. The values of the `branch.<current branch>.merge` that name the branches at the remote named by `branch.<current branch>.remote` are consulted, and then they are mapped via `remote.<remote>.fetch` to their corresponding remote-tracking branches, and the tips of these tracking branches are merged. Defaults to true. merge.ff By default, Git does not create an extra merge commit when merging a commit that is a descendant of the current commit. Instead, the tip of the current branch is fast-forwarded. 
When set to `false`, this variable tells Git to create an extra merge commit in such a case (equivalent to giving the `--no-ff` option from the command line). When set to `only`, only such fast-forward merges are allowed (equivalent to giving the `--ff-only` option from the command line). merge.verifySignatures If true, this is equivalent to the --verify-signatures command line option. See [git-merge[1]](git-merge) for details. merge.branchdesc In addition to branch names, populate the log message with the branch description text associated with them. Defaults to false. merge.log In addition to branch names, populate the log message with at most the specified number of one-line descriptions from the actual commits that are being merged. Defaults to false, and true is a synonym for 20. merge.suppressDest By adding a glob that matches the names of integration branches to this multi-valued configuration variable, the default merge message computed for merges into these integration branches will omit "into <branch name>" from its title. An element with an empty value can be used to clear the list of globs accumulated from previous configuration entries. When there is no `merge.suppressDest` variable defined, the default value of `master` is used for backward compatibility. merge.renameLimit The number of files to consider in the exhaustive portion of rename detection during a merge. If not specified, defaults to the value of diff.renameLimit. If neither merge.renameLimit nor diff.renameLimit are specified, currently defaults to 7000. This setting has no effect if rename detection is turned off. merge.renames Whether Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. Defaults to the value of diff.renames. 
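A few of the merge settings above, combined in a sketch of a repository-local config (values chosen for illustration):

```ini
[merge]
	# Include base text in conflict hunks, trimming matching lines
	# near the edges of each conflict region.
	conflictStyle = zdiff3
	# Summarize up to 20 merged commits in the merge message.
	log = true
	# Raise the exhaustive rename-detection limit above the 7000 default.
	renameLimit = 10000
```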
merge.directoryRenames Whether Git detects directory renames, affecting what happens at merge time to new files added to a directory on one side of history when that directory was renamed on the other side of history. If merge.directoryRenames is set to "false", directory rename detection is disabled, meaning that such new files will be left behind in the old directory. If set to "true", directory rename detection is enabled, meaning that such new files will be moved into the new directory. If set to "conflict", a conflict will be reported for such paths. If merge.renames is false, merge.directoryRenames is ignored and treated as false. Defaults to "conflict". merge.renormalize Tell Git that the canonical representation of files in the repository has changed over time (e.g. earlier commits record text files with CRLF line endings, but recent ones use LF line endings). In such a repository, Git can convert the data recorded in commits to a canonical form before performing a merge to reduce unnecessary conflicts. For more information, see section "Merging branches with differing checkin/checkout attributes" in [gitattributes[5]](gitattributes). merge.stat Whether to print the diffstat between `ORIG_HEAD` and the merge result at the end of the merge. True by default. merge.autoStash When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run merge on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. This option can be overridden by the `--no-autostash` and `--autostash` options of [git-merge[1]](git-merge). Defaults to false. merge.tool Controls which merge tool is used by [git-mergetool[1]](git-mergetool). The list below shows the valid built-in values. Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<tool>.cmd variable is defined.
merge.guitool Controls which merge tool is used by [git-mergetool[1]](git-mergetool) when the -g/--gui flag is specified. The list below shows the valid built-in values. Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<guitool>.cmd variable is defined. * araxis * bc * codecompare * deltawalker * diffmerge * diffuse * ecmerge * emerge * examdiff * guiffy * gvimdiff * kdiff3 * meld * nvimdiff * opendiff * p4merge * smerge * tkdiff * tortoisemerge * vimdiff * winmerge * xxdiff merge.verbosity Controls the amount of output shown by the recursive merge strategy. Level 0 outputs nothing except a final error message if conflicts were detected. Level 1 outputs only conflicts, 2 outputs conflicts and file changes. Level 5 and above outputs debugging information. The default is level 2. Can be overridden by the `GIT_MERGE_VERBOSITY` environment variable. merge.<driver>.name Defines a human-readable name for a custom low-level merge driver. See [gitattributes[5]](gitattributes) for details. merge.<driver>.driver Defines the command that implements a custom low-level merge driver. See [gitattributes[5]](gitattributes) for details. merge.<driver>.recursive Names a low-level merge driver to be used when performing an internal merge between common ancestors. See [gitattributes[5]](gitattributes) for details. mergetool.<tool>.path Override the path for the given tool. This is useful in case your tool is not in the PATH. mergetool.<tool>.cmd Specify the command to invoke the specified merge tool. 
The specified command is evaluated in shell with the following variables available: `BASE` is the name of a temporary file containing the common base of the files to be merged, if available; `LOCAL` is the name of a temporary file containing the contents of the file on the current branch; `REMOTE` is the name of a temporary file containing the contents of the file from the branch being merged; `MERGED` contains the name of the file to which the merge tool should write the results of a successful merge. mergetool.<tool>.hideResolved Allows the user to override the global `mergetool.hideResolved` value for a specific tool. See `mergetool.hideResolved` for the full description. mergetool.<tool>.trustExitCode For a custom merge command, specify whether the exit code of the merge command can be used to determine whether the merge was successful. If this is not set to true then the merge target file timestamp is checked and the merge assumed to have been successful if the file has been updated, otherwise the user is prompted to indicate the success of the merge. mergetool.meld.hasOutput Older versions of `meld` do not support the `--output` option. Git will attempt to detect whether `meld` supports `--output` by inspecting the output of `meld --help`. Configuring `mergetool.meld.hasOutput` will make Git skip these checks and use the configured value instead. Setting `mergetool.meld.hasOutput` to `true` tells Git to unconditionally use the `--output` option, and `false` avoids using `--output`. mergetool.meld.useAutoMerge When the `--auto-merge` is given, meld will merge all non-conflicting parts automatically, highlight the conflicting parts and wait for user decision. Setting `mergetool.meld.useAutoMerge` to `true` tells Git to unconditionally use the `--auto-merge` option with `meld`. Setting this value to `auto` makes git detect whether `--auto-merge` is supported and will only use `--auto-merge` when available. 
A value of `false` avoids using `--auto-merge` altogether, and is the default value. mergetool.vimdiff.layout The vimdiff backend uses this variable to control how its split windows look like. Applies even if you are using Neovim (`nvim`) or gVim (`gvim`) as the merge tool. See BACKEND SPECIFIC HINTS section in [git-mergetool[1]](git-mergetool). for details. mergetool.hideResolved During a merge Git will automatically resolve as many conflicts as possible and write the `MERGED` file containing conflict markers around any conflicts that it cannot resolve; `LOCAL` and `REMOTE` normally represent the versions of the file from before Git’s conflict resolution. This flag causes `LOCAL` and `REMOTE` to be overwritten so that only the unresolved conflicts are presented to the merge tool. Can be configured per-tool via the `mergetool.<tool>.hideResolved` configuration variable. Defaults to `false`. mergetool.keepBackup After performing a merge, the original file with conflict markers can be saved as a file with a `.orig` extension. If this variable is set to `false` then this file is not preserved. Defaults to `true` (i.e. keep the backup files). mergetool.keepTemporaries When invoking a custom merge tool, Git uses a set of temporary files to pass to the tool. If the tool returns an error and this variable is set to `true`, then these temporary files will be preserved, otherwise they will be removed after the tool has exited. Defaults to `false`. mergetool.writeToTemp Git writes temporary `BASE`, `LOCAL`, and `REMOTE` versions of conflicting files in the worktree by default. Git will attempt to use a temporary directory for these files when set `true`. Defaults to `false`. mergetool.prompt Prompt before each invocation of the merge resolution program. notes.mergeStrategy Which merge strategy to choose by default when resolving notes conflicts. Must be one of `manual`, `ours`, `theirs`, `union`, or `cat_sort_uniq`. Defaults to `manual`. 
See "NOTES MERGE STRATEGIES" section of [git-notes[1]](git-notes) for more information on each strategy. This setting can be overridden by passing the `--strategy` option to [git-notes[1]](git-notes). notes.<name>.mergeStrategy Which merge strategy to choose when doing a notes merge into refs/notes/<name>. This overrides the more general "notes.mergeStrategy". See the "NOTES MERGE STRATEGIES" section in [git-notes[1]](git-notes) for more information on the available strategies. notes.displayRef Which ref (or refs, if a glob or specified more than once), in addition to the default set by `core.notesRef` or `GIT_NOTES_REF`, to read notes from when showing commit messages with the `git log` family of commands. This setting can be overridden with the `GIT_NOTES_DISPLAY_REF` environment variable, which must be a colon separated list of refs or globs. A warning will be issued for refs that do not exist, but a glob that does not match any refs is silently ignored. This setting can be disabled by the `--no-notes` option to the `git log` family of commands, or by the `--notes=<ref>` option accepted by those commands. The effective value of "core.notesRef" (possibly overridden by GIT\_NOTES\_REF) is also implicitly added to the list of refs to be displayed. notes.rewrite.<command> When rewriting commits with <command> (currently `amend` or `rebase`), if this variable is `false`, git will not copy notes from the original to the rewritten commit. Defaults to `true`. See also "`notes.rewriteRef`" below. This setting can be overridden with the `GIT_NOTES_REWRITE_REF` environment variable, which must be a colon separated list of refs or globs. notes.rewriteMode When copying notes during a rewrite (see the "notes.rewrite.<command>" option), determines what to do if the target commit already has a note. Must be one of `overwrite`, `concatenate`, `cat_sort_uniq`, or `ignore`. Defaults to `concatenate`. 
This setting can be overridden with the `GIT_NOTES_REWRITE_MODE` environment variable. notes.rewriteRef When copying notes during a rewrite, specifies the (fully qualified) ref whose notes should be copied. May be a glob, in which case notes in all matching refs will be copied. You may also specify this configuration several times. Does not have a default value; you must configure this variable to enable note rewriting. Set it to `refs/notes/commits` to enable rewriting for the default commit notes. Can be overridden with the `GIT_NOTES_REWRITE_REF` environment variable. See `notes.rewrite.<command>` above for a further description of its format. pack.window The size of the window used by [git-pack-objects[1]](git-pack-objects) when no window size is given on the command line. Defaults to 10. pack.depth The maximum delta depth used by [git-pack-objects[1]](git-pack-objects) when no maximum depth is given on the command line. Defaults to 50. Maximum value is 4095. pack.windowMemory The maximum size of memory that is consumed by each thread in [git-pack-objects[1]](git-pack-objects) for pack window memory when no limit is given on the command line. The value can be suffixed with "k", "m", or "g". When left unconfigured (or set explicitly to 0), there will be no limit. pack.compression An integer -1..9, indicating the compression level for objects in a pack file. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If not set, defaults to core.compression. If that is not set, defaults to -1, the zlib default, which is "a default compromise between speed and compression (currently equivalent to level 6)." Note that changing the compression level will not automatically recompress all existing objects. You can force recompression by passing the -F option to [git-repack[1]](git-repack). 
pack.allowPackReuse When true, and when reachability bitmaps are enabled, pack-objects will try to send parts of the bitmapped packfile verbatim. This can reduce memory and CPU usage to serve fetches, but might result in sending a slightly larger pack. Defaults to true. pack.island An extended regular expression configuring a set of delta islands. See "DELTA ISLANDS" in [git-pack-objects[1]](git-pack-objects) for details. pack.islandCore Specify an island name which gets to have its objects be packed first. This creates a kind of pseudo-pack at the front of one pack, so that the objects from the specified island are hopefully faster to copy into any pack that should be served to a user requesting these objects. In practice this means that the island specified should likely correspond to what is the most commonly cloned in the repo. See also "DELTA ISLANDS" in [git-pack-objects[1]](git-pack-objects). pack.deltaCacheSize The maximum memory in bytes used for caching deltas in [git-pack-objects[1]](git-pack-objects) before writing them out to a pack. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Repacking large repositories on machines which are tight with memory might be badly impacted by this though, especially if this cache pushes the system into swapping. A value of 0 means no limit. The smallest size of 1 byte may be used to virtually disable this cache. Defaults to 256 MiB. pack.deltaCacheLimit The maximum size of a delta, that is cached in [git-pack-objects[1]](git-pack-objects). This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Defaults to 1000. Maximum value is 65535. pack.threads Specifies the number of threads to spawn when searching for best delta matches. 
This requires that [git-pack-objects[1]](git-pack-objects) be compiled with pthreads otherwise this option is ignored with a warning. This is meant to reduce packing time on multiprocessor machines. The required amount of memory for the delta search window is however multiplied by the number of threads. Specifying 0 will cause Git to auto-detect the number of CPU’s and set the number of threads accordingly. pack.indexVersion Specify the default pack index version. Valid values are 1 for legacy pack index used by Git versions prior to 1.5.2, and 2 for the new pack index with capabilities for packs larger than 4 GB as well as proper protection against the repacking of corrupted packs. Version 2 is the default. Note that version 2 is enforced and this config option ignored whenever the corresponding pack is larger than 2 GB. If you have an old Git that does not understand the version 2 `*.idx` file, cloning or fetching over a non native protocol (e.g. "http") that will copy both `*.pack` file and corresponding `*.idx` file from the other side may give you a repository that cannot be accessed with your older version of Git. If the `*.pack` file is smaller than 2 GB, however, you can use [git-index-pack[1]](git-index-pack) on the \*.pack file to regenerate the `*.idx` file. pack.packSizeLimit The maximum size of a pack. This setting only affects packing to a file when repacking, i.e. the git:// protocol is unaffected. It can be overridden by the `--max-pack-size` option of [git-repack[1]](git-repack). Reaching this limit results in the creation of multiple packfiles. Note that this option is rarely useful, and may result in a larger total on-disk size (because Git will not store deltas between packs), as well as worse runtime performance (object lookup within multiple packs is slower than a single pack, and optimizations like reachability bitmaps cannot cope with multiple packs). 
If you need to actively run Git using smaller packfiles (e.g., because your filesystem does not support large files), this option may help. But if your goal is to transmit a packfile over a medium that supports limited sizes (e.g., removable media that cannot store the whole repository), you are likely better off creating a single large packfile and splitting it using a generic multi-volume archive tool (e.g., Unix `split`). The minimum size allowed is limited to 1 MiB. The default is unlimited. Common unit suffixes of `k`, `m`, or `g` are supported. pack.useBitmaps When true, git will use pack bitmaps (if available) when packing to stdout (e.g., during the server side of a fetch). Defaults to true. You should not generally need to turn this off unless you are debugging pack bitmaps. pack.useSparse When true, git will default to using the `--sparse` option in `git pack-objects` when the `--revs` option is present. This algorithm only walks trees that appear in paths that introduce new objects. This can have significant performance benefits when computing a pack to send a small change. However, it is possible that extra objects are added to the pack-file if the included commits contain certain types of direct renames. Default is `true`. pack.preferBitmapTips When selecting which commits will receive bitmaps, prefer a commit at the tip of any reference that is a suffix of any value of this configuration over any other commits in the "selection window". Note that setting this configuration to `refs/foo` does not mean that the commits at the tips of `refs/foo/bar` and `refs/foo/baz` will necessarily be selected. This is because commits are selected for bitmaps from within a series of windows of variable length. If a commit at the tip of any reference which is a suffix of any value of this configuration is seen in a window, it is immediately given preference over any other commit in that window. 
pack.writeBitmaps (deprecated) This is a deprecated synonym for `repack.writeBitmaps`. pack.writeBitmapHashCache When true, git will include a "hash cache" section in the bitmap index (if one is written). This cache can be used to feed git’s delta heuristics, potentially leading to better deltas between bitmapped and non-bitmapped objects (e.g., when serving a fetch between an older, bitmapped pack and objects that have been pushed since the last gc). The downside is that it consumes 4 bytes per object of disk space. Defaults to true. When writing a multi-pack reachability bitmap, no new namehashes are computed; instead, any namehashes stored in an existing bitmap are permuted into their appropriate location when writing a new bitmap. pack.writeBitmapLookupTable When true, Git will include a "lookup table" section in the bitmap index (if one is written). This table is used to defer loading individual bitmaps as late as possible. This can be beneficial in repositories that have relatively large bitmap indexes. Defaults to false. pack.writeReverseIndex When true, git will write a corresponding .rev file (see: [gitformat-pack[5]](gitformat-pack)) for each new packfile that it writes in all places except for [git-fast-import[1]](git-fast-import) and in the bulk checkin mechanism. Defaults to false. pager.<cmd> If the value is boolean, turns on or off pagination of the output of a particular Git subcommand when writing to a tty. Otherwise, turns on pagination for the subcommand using the pager specified by the value of `pager.<cmd>`. If `--paginate` or `--no-pager` is specified on the command line, it takes precedence over this option. To disable pagination for all commands, set `core.pager` or `GIT_PAGER` to `cat`. pretty.<name> Alias for a --pretty= format string, as specified in [git-log[1]](git-log). Any aliases defined here can be used just as the built-in pretty formats could. 
For example, running `git config pretty.changelog "format:* %H %s"` would cause the invocation `git log --pretty=changelog` to be equivalent to running `git log "--pretty=format:* %H %s"`. Note that an alias with the same name as a built-in format will be silently ignored. protocol.allow If set, provide a user defined default policy for all protocols which don’t explicitly have a policy (`protocol.<name>.allow`). By default, if unset, known-safe protocols (http, https, git, ssh) have a default policy of `always`, known-dangerous protocols (ext) have a default policy of `never`, and all other protocols (including file) have a default policy of `user`. Supported policies: * `always` - protocol is always able to be used. * `never` - protocol is never able to be used. * `user` - protocol is only able to be used when `GIT_PROTOCOL_FROM_USER` is either unset or has a value of 1. This policy should be used when you want a protocol to be directly usable by the user but don’t want it used by commands which execute clone/fetch/push commands without user input, e.g. recursive submodule initialization. protocol.<name>.allow Set a policy to be used by protocol `<name>` with clone/fetch/push commands. See `protocol.allow` above for the available policies. The protocol names currently used by git are: * `file`: any local file-based path (including `file://` URLs, or local paths) * `git`: the anonymous git protocol over a direct TCP connection (or proxy, if configured) * `ssh`: git over ssh (including `host:path` syntax, `ssh://`, etc). * `http`: git over http, both "smart http" and "dumb http". Note that this does `not` include `https`; if you want to configure both, you must do so individually. * any external helpers are named by their protocol (e.g., use `hg` to allow the `git-remote-hg` helper) protocol.version If set, clients will attempt to communicate with a server using the specified protocol version. 
If the server does not support it, communication falls back to version 0. If unset, the default is `2`. Supported versions: * `0` - the original wire protocol. * `1` - the original wire protocol with the addition of a version string in the initial response from the server. * `2` - Wire protocol version 2, see [gitprotocol-v2[5]](gitprotocol-v2). pull.ff By default, Git does not create an extra merge commit when merging a commit that is a descendant of the current commit. Instead, the tip of the current branch is fast-forwarded. When set to `false`, this variable tells Git to create an extra merge commit in such a case (equivalent to giving the `--no-ff` option from the command line). When set to `only`, only such fast-forward merges are allowed (equivalent to giving the `--ff-only` option from the command line). This setting overrides `merge.ff` when pulling. pull.rebase When true, rebase branches on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. See "branch.<name>.rebase" for setting this on a per-branch basis. When `merges` (or just `m`), pass the `--rebase-merges` option to `git rebase` so that the local merge commits are included in the rebase (see [git-rebase[1]](git-rebase) for details). When the value is `interactive` (or just `i`), the rebase is run in interactive mode. **NOTE**: this is a possibly dangerous operation; do **not** use it unless you understand the implications (see [git-rebase[1]](git-rebase) for details). pull.octopus The default merge strategy to use when pulling multiple branches at once. pull.twohead The default merge strategy to use when pulling a single branch. push.autoSetupRemote If set to "true" assume `--set-upstream` on default push when no upstream tracking exists for the current branch; this option takes effect with push.default options `simple`, `upstream`, and `current`. 
It is useful if by default you want new branches to be pushed to the default remote (like the behavior of `push.default=current`) and you also want the upstream tracking to be set. Workflows most likely to benefit from this option are `simple` central workflows where all branches are expected to have the same name on the remote. push.default Defines the action `git push` should take if no refspec is given (whether from the command-line, config, or elsewhere). Different values are well-suited for specific workflows; for instance, in a purely central workflow (i.e. the fetch source is equal to the push destination), `upstream` is probably what you want. Possible values are: * `nothing` - do not push anything (error out) unless a refspec is given. This is primarily meant for people who want to avoid mistakes by always being explicit. * `current` - push the current branch to update a branch with the same name on the receiving end. Works in both central and non-central workflows. * `upstream` - push the current branch back to the branch whose changes are usually integrated into the current branch (which is called `@{upstream}`). This mode only makes sense if you are pushing to the same repository you would normally pull from (i.e. central workflow). * `tracking` - This is a deprecated synonym for `upstream`. * `simple` - pushes the current branch with the same name on the remote. If you are working on a centralized workflow (pushing to the same repository you pull from, which is typically `origin`), then you need to configure an upstream branch with the same name. This mode is the default since Git 2.0, and is the safest option suited for beginners. * `matching` - push all branches having the same name on both ends. This makes the repository you are pushing to remember the set of branches that will be pushed out (e.g. 
if you always push `maint` and `master` there and no other branches, the repository you push to will have these two branches, and your local `maint` and `master` will be pushed there). To use this mode effectively, you have to make sure `all` the branches you would push out are ready to be pushed out before running `git push`, as the whole point of this mode is to allow you to push all of the branches in one go. If you usually finish work on only one branch and push out the result, while other branches are unfinished, this mode is not for you. Also this mode is not suitable for pushing into a shared central repository, as other people may add new branches there, or update the tip of existing branches outside your control. This used to be the default, but not since Git 2.0 (`simple` is the new default). push.followTags If set to true enable `--follow-tags` option by default. You may override this configuration at time of push by specifying `--no-follow-tags`. push.gpgSign May be set to a boolean value, or the string `if-asked`. A true value causes all pushes to be GPG signed, as if `--signed` is passed to [git-push[1]](git-push). The string `if-asked` causes pushes to be signed if the server supports it, as if `--signed=if-asked` is passed to `git push`. A false value may override a value from a lower-priority config file. An explicit command-line flag always overrides this config option. push.pushOption When no `--push-option=<option>` argument is given from the command line, `git push` behaves as if each <value> of this variable is given as `--push-option=<value>`. This is a multi-valued variable, and an empty value can be used in a higher priority configuration file (e.g. `.git/config` in a repository) to clear the values inherited from a lower priority configuration files (e.g. `$HOME/.gitconfig`). 
``` Example: /etc/gitconfig push.pushoption = a push.pushoption = b ~/.gitconfig push.pushoption = c repo/.git/config push.pushoption = push.pushoption = b This will result in only b (a and c are cleared). ``` push.recurseSubmodules May be "check", "on-demand", "only", or "no", with the same behavior as that of "push --recurse-submodules". If not set, `no` is used by default, unless `submodule.recurse` is set (in which case a `true` value means `on-demand`). push.useForceIfIncludes If set to "true", it is equivalent to specifying `--force-if-includes` as an option to [git-push[1]](git-push) in the command line. Adding `--no-force-if-includes` at the time of push overrides this configuration setting. push.negotiate If set to "true", attempt to reduce the size of the packfile sent by rounds of negotiation in which the client and the server attempt to find commits in common. If "false", Git will rely solely on the server’s ref advertisement to find commits in common. push.useBitmaps If set to "false", disable use of bitmaps for "git push" even if `pack.useBitmaps` is "true", without preventing other git operations from using bitmaps. Default is true. rebase.backend Default backend to use for rebasing. Possible choices are `apply` or `merge`. In the future, if the merge backend gains all remaining capabilities of the apply backend, this setting may become unused. rebase.stat Whether to show a diffstat of what changed upstream since the last rebase. False by default. rebase.autoSquash If set to true enable `--autosquash` option by default. rebase.autoStash When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run rebase on a dirty worktree. However, use with care: the final stash application after a successful rebase might result in non-trivial conflicts. This option can be overridden by the `--no-autostash` and `--autostash` options of [git-rebase[1]](git-rebase). 
Defaults to false. rebase.updateRefs If set to true enable `--update-refs` option by default. rebase.missingCommitsCheck If set to "warn", git rebase -i will print a warning if some commits are removed (e.g. a line was deleted), however the rebase will still proceed. If set to "error", it will print the previous warning and stop the rebase, `git rebase --edit-todo` can then be used to correct the error. If set to "ignore", no checking is done. To drop a commit without warning or error, use the `drop` command in the todo list. Defaults to "ignore". rebase.instructionFormat A format string, as specified in [git-log[1]](git-log), to be used for the todo list during an interactive rebase. The format will automatically have the long commit hash prepended to the format. rebase.abbreviateCommands If set to true, `git rebase` will use abbreviated command names in the todo list resulting in something like this: ``` p deadbee The oneline of the commit p fa1afe1 The oneline of the next commit ... ``` instead of: ``` pick deadbee The oneline of the commit pick fa1afe1 The oneline of the next commit ... ``` Defaults to false. rebase.rescheduleFailedExec Automatically reschedule `exec` commands that failed. This only makes sense in interactive mode (or when an `--exec` option was provided). This is the same as specifying the `--reschedule-failed-exec` option. rebase.forkPoint If set to false set `--no-fork-point` option by default. receive.advertiseAtomic By default, git-receive-pack will advertise the atomic push capability to its clients. If you don’t want to advertise this capability, set this variable to false. receive.advertisePushOptions When set to true, git-receive-pack will advertise the push options capability to its clients. False by default. receive.autogc By default, git-receive-pack will run "git-gc --auto" after receiving data from git-push and updating refs. You can stop it by setting this variable to false. 
receive.certNonceSeed By setting this variable to a string, `git receive-pack` will accept a `git push --signed` and verifies it by using a "nonce" protected by HMAC using this string as a secret key. receive.certNonceSlop When a `git push --signed` sent a push certificate with a "nonce" that was issued by a receive-pack serving the same repository within this many seconds, export the "nonce" found in the certificate to `GIT_PUSH_CERT_NONCE` to the hooks (instead of what the receive-pack asked the sending side to include). This may allow writing checks in `pre-receive` and `post-receive` a bit easier. Instead of checking `GIT_PUSH_CERT_NONCE_SLOP` environment variable that records by how many seconds the nonce is stale to decide if they want to accept the certificate, they only can check `GIT_PUSH_CERT_NONCE_STATUS` is `OK`. receive.fsckObjects If it is set to true, git-receive-pack will check all received objects. See `transfer.fsckObjects` for what’s checked. Defaults to false. If not set, the value of `transfer.fsckObjects` is used instead. receive.fsck.<msg-id> Acts like `fsck.<msg-id>`, but is used by [git-receive-pack[1]](git-receive-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.<msg-id>` documentation for details. receive.fsck.skipList Acts like `fsck.skipList`, but is used by [git-receive-pack[1]](git-receive-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.skipList` documentation for details. receive.keepAlive After receiving the pack from the client, `receive-pack` may produce no output (if `--quiet` was specified) while processing the pack, causing some networks to drop the TCP connection. With this option set, if `receive-pack` does not transmit any data in this phase for `receive.keepAlive` seconds, it will send a short keepalive packet. The default is 5 seconds; set to 0 to disable keepalives entirely. 
receive.unpackLimit If the number of objects received in a push is below this limit then the objects will be unpacked into loose object files. However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of `transfer.unpackLimit` is used instead. receive.maxInputSize If the size of the incoming pack stream is larger than this limit, then git-receive-pack will error out, instead of accepting the pack file. If not set or set to 0, then the size is unlimited. receive.denyDeletes If set to true, git-receive-pack will deny a ref update that deletes the ref. Use this to prevent such a ref deletion via a push. receive.denyDeleteCurrent If set to true, git-receive-pack will deny a ref update that deletes the currently checked out branch of a non-bare repository. receive.denyCurrentBranch If set to true or "refuse", git-receive-pack will deny a ref update to the currently checked out branch of a non-bare repository. Such a push is potentially dangerous because it brings the HEAD out of sync with the index and working tree. If set to "warn", print a warning of such a push to stderr, but allow the push to proceed. If set to false or "ignore", allow such pushes with no message. Defaults to "refuse". Another option is "updateInstead" which will update the working tree if pushing into the current branch. This option is intended for synchronizing working directories when one side is not easily accessible via interactive ssh (e.g. a live web site, hence the requirement that the working directory be clean). This mode also comes in handy when developing inside a VM to test and fix code on different Operating Systems. 
By default, "updateInstead" will refuse the push if the working tree or the index have any difference from the HEAD, but the `push-to-checkout` hook can be used to customize this. See [githooks[5]](githooks). receive.denyNonFastForwards If set to true, git-receive-pack will deny a ref update which is not a fast-forward. Use this to prevent such an update via a push, even if that push is forced. This configuration variable is set when initializing a shared repository. receive.hideRefs This variable is the same as `transfer.hideRefs`, but applies only to `receive-pack` (and so affects pushes, but not fetches). An attempt to update or delete a hidden ref by `git push` is rejected. receive.procReceiveRefs This is a multi-valued variable that defines reference prefixes to match the commands in `receive-pack`. Commands matching the prefixes will be executed by an external hook "proc-receive", instead of the internal `execute_commands` function. If this variable is not defined, the "proc-receive" hook will never be used, and all commands will be executed by the internal `execute_commands` function. For example, if this variable is set to "refs/for", pushing to reference such as "refs/for/master" will not create or update a reference named "refs/for/master", but may create or update a pull request directly by running the hook "proc-receive". Optional modifiers can be provided in the beginning of the value to filter commands for specific actions: create (a), modify (m), delete (d). A `!` can be included in the modifiers to negate the reference prefix entry. E.g.: ``` git config --system --add receive.procReceiveRefs ad:refs/heads git config --system --add receive.procReceiveRefs !:refs/heads ``` receive.updateServerInfo If set to true, git-receive-pack will run git-update-server-info after receiving data from git-push and updating refs. receive.shallowUpdate If set to true, .git/shallow can be updated when new refs require new shallow roots. 
Otherwise those refs are rejected. remote.pushDefault The remote to push to by default. Overrides `branch.<name>.remote` for all branches, and is overridden by `branch.<name>.pushRemote` for specific branches. remote.<name>.url The URL of a remote repository. See [git-fetch[1]](git-fetch) or [git-push[1]](git-push). remote.<name>.pushurl The push URL of a remote repository. See [git-push[1]](git-push). remote.<name>.proxy For remotes that require curl (http, https and ftp), the URL to the proxy to use for that remote. Set to the empty string to disable proxying for that remote. remote.<name>.proxyAuthMethod For remotes that require curl (http, https and ftp), the method to use for authenticating against the proxy in use (probably set in `remote.<name>.proxy`). See `http.proxyAuthMethod`. remote.<name>.fetch The default set of "refspec" for [git-fetch[1]](git-fetch). See [git-fetch[1]](git-fetch). remote.<name>.push The default set of "refspec" for [git-push[1]](git-push). See [git-push[1]](git-push). remote.<name>.mirror If true, pushing to this remote will automatically behave as if the `--mirror` option was given on the command line. remote.<name>.skipDefaultUpdate If true, this remote will be skipped by default when updating using [git-fetch[1]](git-fetch) or the `update` subcommand of [git-remote[1]](git-remote). remote.<name>.skipFetchAll If true, this remote will be skipped by default when updating using [git-fetch[1]](git-fetch) or the `update` subcommand of [git-remote[1]](git-remote). remote.<name>.receivepack The default program to execute on the remote side when pushing. See option --receive-pack of [git-push[1]](git-push). remote.<name>.uploadpack The default program to execute on the remote side when fetching. See option --upload-pack of [git-fetch-pack[1]](git-fetch-pack). remote.<name>.tagOpt Setting this value to --no-tags disables automatic tag following when fetching from remote <name>. 
Setting it to --tags will fetch every tag from remote <name>, even if they are not reachable from remote branch heads. Passing these flags directly to [git-fetch[1]](git-fetch) can override this setting. See options --tags and --no-tags of [git-fetch[1]](git-fetch). remote.<name>.vcs Setting this to a value <vcs> will cause Git to interact with the remote with the git-remote-<vcs> helper. remote.<name>.prune When set to true, fetching from this remote by default will also remove any remote-tracking references that no longer exist on the remote (as if the `--prune` option was given on the command line). Overrides `fetch.prune` settings, if any. remote.<name>.pruneTags When set to true, fetching from this remote by default will also remove any local tags that no longer exist on the remote if pruning is activated in general via `remote.<name>.prune`, `fetch.prune` or `--prune`. Overrides `fetch.pruneTags` settings, if any. See also `remote.<name>.prune` and the PRUNING section of [git-fetch[1]](git-fetch). remote.<name>.promisor When set to true, this remote will be used to fetch promisor objects. remote.<name>.partialclonefilter The filter that will be applied when fetching from this promisor remote. Changing or clearing this value will only affect fetches for new commits. To fetch associated objects for commits already present in the local object database, use the `--refetch` option of [git-fetch[1]](git-fetch). remotes.<group> The list of remotes which are fetched by "git remote update <group>". See [git-remote[1]](git-remote). repack.useDeltaBaseOffset By default, [git-repack[1]](git-repack) creates packs that use delta-base offset. If you need to share your repository with Git older than version 1.4.4, either directly or via a dumb protocol such as http, then you need to set this option to "false" and repack. Access from old Git versions over the native protocol are unaffected by this option. 
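The `remote.<name>.*` settings described above can be combined per remote. A sketch follows, using a hypothetical remote named `origin` with illustrative URLs; `demo.gitconfig` is a scratch file standing in for a repository's `.git/config`.

```shell
# Sketch: a remote that fetches over HTTPS but pushes over SSH, and that
# prunes stale remote-tracking branches and tags on every fetch.
# The URLs are placeholders; demo.gitconfig avoids touching real config.
git config --file demo.gitconfig remote.origin.url https://example.com/repo.git
git config --file demo.gitconfig remote.origin.pushurl ssh://git@example.com/repo.git
git config --file demo.gitconfig remote.origin.prune true
git config --file demo.gitconfig remote.origin.pruneTags true
```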
repack.packKeptObjects If set to true, makes `git repack` act as if `--pack-kept-objects` was passed. See [git-repack[1]](git-repack) for details. Defaults to `false` normally, but `true` if a bitmap index is being written (either via `--write-bitmap-index` or `repack.writeBitmaps`). repack.useDeltaIslands If set to true, makes `git repack` act as if `--delta-islands` was passed. Defaults to `false`. repack.writeBitmaps When true, git will write a bitmap index when packing all objects to disk (e.g., when `git repack -a` is run). This index can speed up the "counting objects" phase of subsequent packs created for clones and fetches, at the cost of some disk space and extra time spent on the initial repack. This has no effect if multiple packfiles are created. Defaults to true on bare repos, false otherwise. repack.updateServerInfo If set to false, [git-repack[1]](git-repack) will not run [git-update-server-info[1]](git-update-server-info). Defaults to true. Can be overridden when true by the `-n` option of [git-repack[1]](git-repack). repack.cruftWindow repack.cruftWindowMemory repack.cruftDepth repack.cruftThreads Parameters used by [git-pack-objects[1]](git-pack-objects) when generating a cruft pack and the respective parameters are not given over the command line. See similarly named `pack.*` configuration variables for defaults and meaning. rerere.autoUpdate When set to true, `git-rerere` updates the index with the resulting contents after it cleanly resolves conflicts using previously recorded resolution. Defaults to false. rerere.enabled Activate recording of resolved conflicts, so that identical conflict hunks can be resolved automatically, should they be encountered again. By default, [git-rerere[1]](git-rerere) is enabled if there is an `rr-cache` directory under the `$GIT_DIR`, e.g. if "rerere" was previously used in the repository. revert.reference Setting this variable to true makes `git revert` behave as if the `--reference` option is given. 
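For example, the rerere settings above can be enabled together, so that previously recorded conflict resolutions are replayed and staged automatically. The scratch file `demo.gitconfig` is illustrative; in practice you would use `--global` or `--local`.

```shell
# Sketch: record conflict resolutions and reuse them, updating the index
# when a conflict is resolved cleanly from the recorded resolution.
git config --file demo.gitconfig rerere.enabled true
git config --file demo.gitconfig rerere.autoUpdate true
```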
safe.bareRepository Specifies which bare repositories Git will work with. The currently supported values are:

* `all`: Git works with all bare repositories. This is the default.
* `explicit`: Git only works with bare repositories specified via the top-level `--git-dir` command-line option, or the `GIT_DIR` environment variable (see [git[1]](git)).

If you do not use bare repositories in your workflow, then it may be beneficial to set `safe.bareRepository` to `explicit` in your global config. This will protect you from attacks that involve cloning a repository that contains a bare repository and running a Git command within that directory. This config setting is only respected in protected configuration (see [SCOPES](#SCOPES)). This prevents the untrusted repository from tampering with this value. safe.directory These config entries specify Git-tracked directories that are considered safe even if they are owned by someone other than the current user. By default, Git will refuse to even parse a Git config of a repository owned by someone else, let alone run its hooks, and this config setting allows users to specify exceptions, e.g. for intentionally shared repositories (see the `--shared` option in [git-init[1]](git-init)). This is a multi-valued setting, i.e. you can add more than one directory via `git config --add`. To reset the list of safe directories (e.g. to override any such directories specified in the system config), add a `safe.directory` entry with an empty value. This config setting is only respected in protected configuration (see [SCOPES](#SCOPES)). This prevents the untrusted repository from tampering with this value. The value of this setting is interpolated, i.e. `~/<path>` expands to a path relative to the home directory and `%(prefix)/<path>` expands to a path relative to Git’s (runtime) prefix. To completely opt-out of this security check, set `safe.directory` to the string `*`.
This will allow all repositories to be treated as if their directory was listed in the `safe.directory` list. If `safe.directory=*` is set in the system config and you want to re-enable this protection, then initialize your list with an empty value before listing the repositories that you deem safe. As explained, by default Git only allows you to access repositories owned by yourself, i.e. the user who is running Git. When Git is running as `root` on a non-Windows platform that provides sudo, however, Git checks the `SUDO_UID` environment variable that sudo creates and will allow access to the uid recorded as its value in addition to `root`’s own id. This is to make it easy to perform a common sequence during installation, "make && sudo make install". A Git process running under `sudo` runs as `root`, but the `sudo` command exports the environment variable to record the id of the original user. If that is not what you would prefer, and you want Git to only trust repositories that are owned by root instead, then you can remove the `SUDO_UID` variable from root’s environment before invoking Git. sendemail.identity A configuration identity. When given, causes values in the `sendemail.<identity>` subsection to take precedence over values in the `sendemail` section. The default identity is the value of `sendemail.identity`. sendemail.smtpEncryption See [git-send-email[1]](git-send-email) for description. Note that this setting is not subject to the `identity` mechanism. sendemail.smtpsslcertpath Path to ca-certificates (either a directory or a single file). Set it to an empty string to disable certificate verification. sendemail.<identity>.\* Identity-specific versions of the `sendemail.*` parameters found below, taking precedence over those when this identity is selected, through either the command-line or `sendemail.identity`.
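The identity mechanism above can be sketched as follows, with a hypothetical identity named `work` and placeholder SMTP servers; `demo.gitconfig` is a scratch file used for illustration.

```shell
# Sketch: a base SMTP server plus an identity-specific override.
# Server names are placeholders, not real hosts.
git config --file demo.gitconfig sendemail.smtpServer smtp.example.org
git config --file demo.gitconfig sendemail.work.smtpServer smtp.work.example.com
# Selecting the identity makes the sendemail.work.* values take precedence:
git config --file demo.gitconfig sendemail.identity work
```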
sendemail.multiEdit If true (default), a single editor instance will be spawned to edit files you have to edit (patches when `--annotate` is used, and the summary when `--compose` is used). If false, files will be edited one after the other, spawning a new editor each time. sendemail.confirm Sets the default for whether to confirm before sending. Must be one of `always`, `never`, `cc`, `compose`, or `auto`. See `--confirm` in the [git-send-email[1]](git-send-email) documentation for the meaning of these values. sendemail.aliasesFile To avoid typing long email addresses, point this to one or more email aliases files. You must also supply `sendemail.aliasFileType`. sendemail.aliasFileType Format of the file(s) specified in sendemail.aliasesFile. Must be one of `mutt`, `mailrc`, `pine`, `elm`, `gnus`, or `sendmail`. What an alias file in each format looks like can be found in the documentation of the email program of the same name. The differences and limitations from the standard formats are described below:

sendmail

* Quoted aliases and quoted addresses are not supported: lines that contain a `"` symbol are ignored.
* Redirection to a file (`/path/name`) or pipe (`|command`) is not supported.
* File inclusion (`:include: /path/name`) is not supported.
* Warnings are printed on the standard error output for any explicitly unsupported constructs, and any other lines that are not recognized by the parser.

sendemail.annotate
sendemail.bcc
sendemail.cc
sendemail.ccCmd
sendemail.chainReplyTo
sendemail.envelopeSender
sendemail.from
sendemail.signedoffbycc
sendemail.smtpPass
sendemail.suppresscc
sendemail.suppressFrom
sendemail.to
sendemail.tocmd
sendemail.smtpDomain
sendemail.smtpServer
sendemail.smtpServerPort
sendemail.smtpServerOption
sendemail.smtpUser
sendemail.thread
sendemail.transferEncoding
sendemail.validate
sendemail.xmailer
These configuration variables all provide a default for [git-send-email[1]](git-send-email) command-line options.
See its documentation for details. sendemail.signedoffcc (deprecated) Deprecated alias for `sendemail.signedoffbycc`. sendemail.smtpBatchSize Number of messages to be sent per connection, after which a relogin will happen. If the value is 0 or undefined, send all messages in one connection. See also the `--batch-size` option of [git-send-email[1]](git-send-email). sendemail.smtpReloginDelay Seconds to wait before reconnecting to the SMTP server. See also the `--relogin-delay` option of [git-send-email[1]](git-send-email). sendemail.forbidSendmailVariables To avoid common misconfiguration mistakes, [git-send-email[1]](git-send-email) will abort with a warning if any configuration options for "sendmail" exist. Set this variable to bypass the check. sequence.editor Text editor used by `git rebase -i` for editing the rebase instruction file. The value is meant to be interpreted by the shell when it is used. It can be overridden by the `GIT_SEQUENCE_EDITOR` environment variable. When not configured, the default commit message editor is used instead. showBranch.default The default set of branches for [git-show-branch[1]](git-show-branch). See [git-show-branch[1]](git-show-branch). sparse.expectFilesOutsideOfPatterns Typically with sparse checkouts, files not matching any sparsity patterns are marked with a SKIP\_WORKTREE bit in the index and are missing from the working tree. Accordingly, Git will ordinarily check whether files with the SKIP\_WORKTREE bit are in fact present in the working tree contrary to expectations. If Git finds any, it marks those paths as present by clearing the relevant SKIP\_WORKTREE bits. This option can be used to tell Git that such present-despite-skipped files are expected and to stop checking for them. The default is `false`, which allows Git to automatically recover from the list of files in the index and working tree falling out of sync.
Set this to `true` if you are in a setup where some external factor relieves Git of the responsibility for maintaining the consistency between the presence of working tree files and sparsity patterns, for example if you have a Git-aware virtual file system that has a robust mechanism for keeping the working tree and the sparsity patterns up to date based on access patterns. Regardless of this setting, Git does not check for present-despite-skipped files unless sparse checkout is enabled, so this config option has no effect unless `core.sparseCheckout` is `true`. splitIndex.maxPercentChange When the split index feature is used, this specifies the percent of entries the split index can contain compared to the total number of entries in both the split index and the shared index before a new shared index is written. The value should be between 0 and 100. If the value is 0, then a new shared index is always written; if it is 100, a new shared index is never written. By default the value is 20, so a new shared index is written if the number of entries in the split index would be greater than 20 percent of the total number of entries. See [git-update-index[1]](git-update-index). splitIndex.sharedIndexExpire When the split index feature is used, shared index files that were not modified since the time this variable specifies will be removed when a new shared index file is created. The value "now" expires all entries immediately, and "never" suppresses expiration altogether. The default value is "2.weeks.ago". Note that a shared index file is considered modified (for the purpose of expiration) each time a new split-index file is either created based on it or read from it. See [git-update-index[1]](git-update-index). ssh.variant By default, Git determines the command line arguments to use based on the basename of the configured SSH command (configured using the environment variable `GIT_SSH` or `GIT_SSH_COMMAND` or the config setting `core.sshCommand`).
If the basename is unrecognized, Git will attempt to detect support of OpenSSH options by first invoking the configured SSH command with the `-G` (print configuration) option and will subsequently use OpenSSH options (if that is successful) or no options besides the host and remote command (if it fails). The config variable `ssh.variant` can be set to override this detection. Valid values are `ssh` (to use OpenSSH options), `plink`, `putty`, `tortoiseplink`, and `simple` (no options except the host and remote command). The default auto-detection can be explicitly requested using the value `auto`. Any other value is treated as `ssh`. This setting can also be overridden via the environment variable `GIT_SSH_VARIANT`. The current command-line parameters used for each variant are as follows:

* `ssh` - [-p port] [-4] [-6] [-o option] [username@]host command
* `simple` - [username@]host command
* `plink` or `putty` - [-P port] [-4] [-6] [username@]host command
* `tortoiseplink` - [-P port] [-4] [-6] -batch [username@]host command

Except for the `simple` variant, command-line parameters are likely to change as Git gains new features. status.relativePaths By default, [git-status[1]](git-status) shows paths relative to the current directory. Setting this variable to `false` shows paths relative to the repository root (this was the default for Git prior to v1.5.4). status.short Set to true to enable --short by default in [git-status[1]](git-status). The option --no-short takes precedence over this variable. status.branch Set to true to enable --branch by default in [git-status[1]](git-status). The option --no-branch takes precedence over this variable. status.aheadBehind Set to true to enable `--ahead-behind` and false to enable `--no-ahead-behind` by default in [git-status[1]](git-status) for non-porcelain status formats. Defaults to true.
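As a sketch of the status defaults above, a user who prefers compact output could set the following (scratch file `demo.gitconfig` for illustration; use `--global` in practice):

```shell
# Sketch: make `git status` default to the short format with branch info.
git config --file demo.gitconfig status.short true
git config --file demo.gitconfig status.branch true
```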
status.displayCommentPrefix If set to true, [git-status[1]](git-status) will insert a comment prefix before each output line (starting with `core.commentChar`, i.e. `#` by default). This was the behavior of [git-status[1]](git-status) in Git 1.8.4 and earlier. Defaults to false. status.renameLimit The number of files to consider when performing rename detection in [git-status[1]](git-status) and [git-commit[1]](git-commit). Defaults to the value of diff.renameLimit. status.renames Whether and how Git detects renames in [git-status[1]](git-status) and [git-commit[1]](git-commit). If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. If set to "copies" or "copy", Git will detect copies, as well. Defaults to the value of diff.renames. status.showStash If set to true, [git-status[1]](git-status) will display the number of entries currently stashed away. Defaults to false. status.showUntrackedFiles By default, [git-status[1]](git-status) and [git-commit[1]](git-commit) show files which are not currently tracked by Git. Directories which contain only untracked files are shown with the directory name only. Showing untracked files means that Git needs to lstat() all the files in the whole repository, which might be slow on some systems. So, this variable controls how the commands display the untracked files. Possible values are:

* `no` - Show no untracked files.
* `normal` - Show untracked files and directories.
* `all` - Show also individual files in untracked directories.

If this variable is not specified, it defaults to `normal`. This variable can be overridden with the -u|--untracked-files option of [git-status[1]](git-status) and [git-commit[1]](git-commit). status.submoduleSummary Defaults to false.
If this is set to a non-zero number or true (identical to -1 or an unlimited number), the submodule summary will be enabled and a summary of commits for modified submodules will be shown (see --summary-limit option of [git-submodule[1]](git-submodule)). Please note that the summary output command will be suppressed for all submodules when `diff.ignoreSubmodules` is set to `all` or only for those submodules where `submodule.<name>.ignore=all`. The only exception to that rule is that status and commit will show staged submodule changes. To also view the summary for ignored submodules you can either use the --ignore-submodules=dirty command-line option or the `git submodule summary` command, which shows a similar output but does not honor these settings. stash.showIncludeUntracked If this is set to true, the `git stash show` command will show the untracked files of a stash entry. Defaults to false. See description of `show` command in [git-stash[1]](git-stash). stash.showPatch If this is set to true, the `git stash show` command without an option will show the stash entry in patch form. Defaults to false. See description of `show` command in [git-stash[1]](git-stash). stash.showStat If this is set to true, the `git stash show` command without an option will show a diffstat of the stash entry. Defaults to true. See description of `show` command in [git-stash[1]](git-stash). submodule.<name>.url The URL for a submodule. This variable is copied from the .gitmodules file to the git config via `git submodule init`. The user can change the configured URL before obtaining the submodule via `git submodule update`. If neither submodule.<name>.active nor submodule.active is set, the presence of this variable is used as a fallback to indicate whether the submodule is of interest to git commands. See [git-submodule[1]](git-submodule) and [gitmodules[5]](gitmodules) for details.
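For example, the stash display settings above could be combined so that `git stash show` reports untracked files and shows a full patch by default (scratch file `demo.gitconfig` for illustration; use `--global` in practice):

```shell
# Sketch: make `git stash show` more verbose by default.
git config --file demo.gitconfig stash.showIncludeUntracked true
git config --file demo.gitconfig stash.showPatch true
```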
submodule.<name>.update The method by which a submodule is updated by `git submodule update`, which is the only affected command; others such as `git checkout --recurse-submodules` are unaffected. It exists for historical reasons, when `git submodule` was the only command to interact with submodules; settings like `submodule.active` and `pull.rebase` are more specific. It is populated by `git submodule init` from the [gitmodules[5]](gitmodules) file. See description of `update` command in [git-submodule[1]](git-submodule). submodule.<name>.branch The remote branch name for a submodule, used by `git submodule update --remote`. Set this option to override the value found in the `.gitmodules` file. See [git-submodule[1]](git-submodule) and [gitmodules[5]](gitmodules) for details. submodule.<name>.fetchRecurseSubmodules This option can be used to control recursive fetching of this submodule. It can be overridden by using the --[no-]recurse-submodules command-line option to "git fetch" and "git pull". This setting will override the one found in the [gitmodules[5]](gitmodules) file. submodule.<name>.ignore Defines under what circumstances "git status" and the diff family show a submodule as modified. When set to "all", it will never be considered modified (but it will nonetheless show up in the output of status and commit when it has been staged). "dirty" will ignore all changes to the submodule’s work tree and take only differences between the HEAD of the submodule and the commit recorded in the superproject into account. "untracked" will additionally let submodules with modified tracked files in their work tree show up. Using "none" (the default when this option is not set) also shows submodules that have untracked files in their work tree as changed. This setting overrides any setting made in .gitmodules for this submodule; both settings can be overridden on the command line by using the "--ignore-submodules" option.
The `git submodule` commands are not affected by this setting. submodule.<name>.active Boolean value indicating if the submodule is of interest to git commands. This config option takes precedence over the submodule.active config option. See [gitsubmodules[7]](gitsubmodules) for details. submodule.active A repeated field which contains a pathspec used to match against a submodule’s path to determine if the submodule is of interest to git commands. See [gitsubmodules[7]](gitsubmodules) for details. submodule.recurse A boolean indicating if commands should enable the `--recurse-submodules` option by default. Defaults to false. When set to true, it can be deactivated via the `--no-recurse-submodules` option. Note that some Git commands lacking this option may call some of the above commands affected by `submodule.recurse`; for instance `git remote update` will call `git fetch` but does not have a `--no-recurse-submodules` option. For these commands a workaround is to temporarily change the configuration value by using `git -c submodule.recurse=0`. The following list shows the commands that accept `--recurse-submodules` and whether they are supported by this setting.

* `checkout`, `fetch`, `grep`, `pull`, `push`, `read-tree`, `reset`, `restore` and `switch` are always supported.
* `clone` and `ls-files` are not supported.
* `branch` is supported only if `submodule.propagateBranches` is enabled.

submodule.propagateBranches [EXPERIMENTAL] A boolean that enables branching support when using `--recurse-submodules` or `submodule.recurse=true`. Enabling this will allow certain commands to accept `--recurse-submodules` and certain commands that already accept `--recurse-submodules` will now consider branches. Defaults to false. submodule.fetchJobs Specifies how many submodules are fetched/cloned at the same time. A positive integer allows up to that number of submodules fetched in parallel. A value of 0 will give some reasonable default. If unset, it defaults to 1.
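A sketch combining the submodule settings above: recurse into submodules by default and fetch several of them in parallel (scratch file `demo.gitconfig` for illustration; use `--global` or `--local` in practice):

```shell
# Sketch: enable --recurse-submodules by default and parallelize
# submodule fetches. The job count of 4 is an arbitrary example value.
git config --file demo.gitconfig submodule.recurse true
git config --file demo.gitconfig submodule.fetchJobs 4
```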
submodule.alternateLocation Specifies how the submodules obtain alternates when submodules are cloned. Possible values are `no`, `superproject`. By default `no` is assumed, which doesn’t add references. When the value is set to `superproject` the submodule to be cloned computes its alternates location relative to the superproject’s alternate. submodule.alternateErrorStrategy Specifies how to treat errors with the alternates for a submodule as computed via `submodule.alternateLocation`. Possible values are `ignore`, `info`, `die`. Default is `die`. Note that if set to `ignore` or `info`, and if there is an error with the computed alternate, the clone proceeds as if no alternate was specified. tag.forceSignAnnotated A boolean to specify whether annotated tags should be GPG signed when created. If `--annotate` is specified on the command line, it takes precedence over this option. tag.sort This variable controls the sort ordering of tags when displayed by [git-tag[1]](git-tag). Without the "--sort=<value>" option provided, the value of this variable will be used as the default. tag.gpgSign A boolean to specify whether all tags should be GPG signed. Use of this option when running in an automated script can result in a large number of tags being signed. It is therefore convenient to use an agent to avoid typing your gpg passphrase several times. Note that this option doesn’t affect tag signing behavior enabled by "-u <keyid>" or "--local-user=<keyid>" options. tar.umask This variable can be used to restrict the permission bits of tar archive entries. The default is 0002, which turns off the world write bit. The special value "user" indicates that the archiving user’s umask will be used instead. See umask(2) and [git-archive[1]](git-archive). Trace2 config settings are only read from the system and global config files; repository local and worktree config files and `-c` command line arguments are not respected.
trace2.normalTarget This variable controls the normal target destination. It may be overridden by the `GIT_TRACE2` environment variable. The following table shows possible values. trace2.perfTarget This variable controls the performance target destination. It may be overridden by the `GIT_TRACE2_PERF` environment variable. The following table shows possible values. trace2.eventTarget This variable controls the event target destination. It may be overridden by the `GIT_TRACE2_EVENT` environment variable. The following table shows possible values.

* `0` or `false` - Disables the target.
* `1` or `true` - Writes to `STDERR`.
* `[2-9]` - Writes to the already opened file descriptor.
* `<absolute-pathname>` - Writes to the file in append mode. If the target already exists and is a directory, the traces will be written to files (one per process) underneath the given directory.
* `af_unix:[<socket_type>:]<absolute-pathname>` - Write to a Unix domain socket (on platforms that support them). Socket type can be either `stream` or `dgram`; if omitted, Git will try both.

trace2.normalBrief Boolean. When true, `time`, `filename`, and `line` fields are omitted from normal output. May be overridden by the `GIT_TRACE2_BRIEF` environment variable. Defaults to false. trace2.perfBrief Boolean. When true, `time`, `filename`, and `line` fields are omitted from PERF output. May be overridden by the `GIT_TRACE2_PERF_BRIEF` environment variable. Defaults to false. trace2.eventBrief Boolean. When true, `time`, `filename`, and `line` fields are omitted from event output. May be overridden by the `GIT_TRACE2_EVENT_BRIEF` environment variable. Defaults to false. trace2.eventNesting Integer. Specifies the desired depth of nested regions in the event output. Regions deeper than this value will be omitted. May be overridden by the `GIT_TRACE2_EVENT_NESTING` environment variable. Defaults to 2.
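A sketch of the trace2 target settings above, directing performance traces to a directory (one file per process) with brief output. Remember that Git only honors trace2 settings from the system and global scopes; the `--file demo.gitconfig` form here is purely illustrative, and the path is a placeholder.

```shell
# Sketch: write PERF-format traces under a directory, omitting the
# time/filename/line fields. In real use, write these with --global.
git config --file demo.gitconfig trace2.perfTarget /tmp/git-trace2
git config --file demo.gitconfig trace2.perfBrief true
```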
trace2.configParams A comma-separated list of patterns of "important" config settings that should be recorded in the trace2 output. For example, `core.*,remote.*.url` would cause the trace2 output to contain events listing each configured remote. May be overridden by the `GIT_TRACE2_CONFIG_PARAMS` environment variable. Unset by default. trace2.envVars A comma-separated list of "important" environment variables that should be recorded in the trace2 output. For example, `GIT_HTTP_USER_AGENT,GIT_CONFIG` would cause the trace2 output to contain events listing the overrides for HTTP user agent and the location of the Git configuration file (assuming any are set). May be overridden by the `GIT_TRACE2_ENV_VARS` environment variable. Unset by default. trace2.destinationDebug Boolean. When true Git will print error messages when a trace target destination cannot be opened for writing. By default, these errors are suppressed and tracing is silently disabled. May be overridden by the `GIT_TRACE2_DST_DEBUG` environment variable. trace2.maxFiles Integer. When writing trace files to a target directory, do not write additional traces if we would exceed this many files. Instead, write a sentinel file that will block further tracing to this directory. Defaults to 0, which disables this check. transfer.credentialsInUrl A configured URL can contain plaintext credentials in the form `<protocol>://<user>:<password>@<domain>/<path>`. You may want to warn or forbid the use of such configuration (in favor of using [git-credential[1]](git-credential)). This will be used on [git-clone[1]](git-clone), [git-fetch[1]](git-fetch), [git-push[1]](git-push), and any other direct use of the configured URL. Note that this is currently limited to detecting credentials in `remote.<name>.url` configuration, it won’t detect credentials in `remote.<name>.pushurl` configuration. You might want to enable this to prevent inadvertent credentials exposure, e.g. 
because:

* The OS or system where you’re running git may not provide a way or otherwise allow you to configure the permissions of the configuration file where the username and/or password are stored.
* Even if it does, having such data stored "at rest" might expose you in other ways, e.g. a backup process might copy the data to another system.
* The git programs will pass the full URL to one another as arguments on the command-line, meaning the credentials will be exposed to other users on OSes or systems that allow other users to see the full process list of other users. On Linux the "hidepid" setting documented in procfs(5) allows for configuring this behavior.

If such concerns don’t apply to you, then you probably don’t need to be concerned about credentials exposure due to storing that sensitive data in git’s configuration files. If you do want to use this, set `transfer.credentialsInUrl` to one of these values:

* `allow` (default): Git will proceed with its activity without warning.
* `warn`: Git will write a warning message to `stderr` when parsing a URL with a plaintext credential.
* `die`: Git will write a failure message to `stderr` when parsing a URL with a plaintext credential.

transfer.fsckObjects When `fetch.fsckObjects` or `receive.fsckObjects` are not set, the value of this variable is used instead. Defaults to false. When set, the fetch or receive will abort in the case of a malformed object or a link to a nonexistent object. In addition, various other issues are checked for, including legacy issues (see `fsck.<msg-id>`), and potential security issues like the existence of a `.GIT` directory or a malicious `.gitmodules` file (see the release notes for v2.2.1 and v2.17.1 for details). Other sanity and security checks may be added in future releases. On the receiving side, failing fsckObjects will make those objects unreachable, see "QUARANTINE ENVIRONMENT" in [git-receive-pack[1]](git-receive-pack).
On the fetch side, malformed objects will instead be left unreferenced in the repository. Due to the non-quarantine nature of the `fetch.fsckObjects` implementation it cannot be relied upon to leave the object store clean like `receive.fsckObjects` can. As objects are unpacked they’re written to the object store, so there can be cases where malicious objects get introduced even though the "fetch" failed, only to have a subsequent "fetch" succeed because only new incoming objects are checked, not those that have already been written to the object store. That difference in behavior should not be relied upon. In the future, such objects may be quarantined for "fetch" as well. For now, the paranoid need to find some way to emulate the quarantine environment if they’d like the same protection as "push". E.g. in the case of an internal mirror do the mirroring in two steps, one to fetch the untrusted objects, and then do a second "push" (which will use the quarantine) to another internal repo, and have internal clients consume this pushed-to repository, or embargo internal fetches and only allow them once a full "fsck" has run (and no new fetches have happened in the meantime). transfer.hideRefs String(s) `receive-pack` and `upload-pack` use to decide which refs to omit from their initial advertisements. Use more than one definition to specify multiple prefix strings. A ref that is under the hierarchies listed in the value of this variable is excluded, and is hidden when responding to `git push` or `git fetch`. See `receive.hideRefs` and `uploadpack.hideRefs` for program-specific versions of this config. You may also include a `!` in front of the ref name to negate the entry, explicitly exposing it, even if an earlier entry marked it as hidden. If you have multiple hideRefs values, later entries override earlier ones (and entries in more-specific config files override less-specific ones). 
If a namespace is in use, the namespace prefix is stripped from each reference before it is matched against `transfer.hiderefs` patterns. In order to match refs before stripping, add a `^` in front of the ref name. If you combine `!` and `^`, `!` must be specified first. For example, if `refs/heads/master` is specified in `transfer.hideRefs` and the current namespace is `foo`, then `refs/namespaces/foo/refs/heads/master` is omitted from the advertisements. If `uploadpack.allowRefInWant` is set, `upload-pack` will treat `want-ref refs/heads/master` in a protocol v2 `fetch` command as if `refs/namespaces/foo/refs/heads/master` did not exist. `receive-pack`, on the other hand, will still advertise the object id the ref is pointing to without mentioning its name (a so-called ".have" line). Even if you hide refs, a client may still be able to steal the target objects via the techniques described in the "SECURITY" section of the [gitnamespaces[7]](gitnamespaces) man page; it’s best to keep private data in a separate repository. transfer.unpackLimit When `fetch.unpackLimit` or `receive.unpackLimit` are not set, the value of this variable is used instead. The default value is 100. transfer.advertiseSID Boolean. When true, client and server processes will advertise their unique session IDs to their remote counterpart. Defaults to false. uploadarchive.allowUnreachable If true, allow clients to use `git archive --remote` to request any tree, whether reachable from the ref tips or not. See the discussion in the "SECURITY" section of [git-upload-archive[1]](git-upload-archive) for more details. Defaults to `false`. uploadpack.hideRefs This variable is the same as `transfer.hideRefs`, but applies only to `upload-pack` (and so affects only fetches, not pushes). An attempt to fetch a hidden ref by `git fetch` will fail. See also `uploadpack.allowTipSHA1InWant`. 
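For illustration, a configuration that hides a private hierarchy from `upload-pack` while re-exposing a single ref under it might look like this (the ref names here are made up; recall that later entries override earlier ones):

```
[uploadpack]
	hideRefs = refs/private
	hideRefs = !refs/private/announcements
```

With this in place, `git fetch` cannot see or fetch anything under `refs/private`, except `refs/private/announcements`, which the negated entry explicitly exposes. Pushes are unaffected, since `uploadpack.hideRefs` applies only to `upload-pack`.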
uploadpack.allowTipSHA1InWant When `uploadpack.hideRefs` is in effect, allow `upload-pack` to accept a fetch request that asks for an object at the tip of a hidden ref (by default, such a request is rejected). See also `uploadpack.hideRefs`. Even if this is false, a client may be able to steal objects via the techniques described in the "SECURITY" section of the [gitnamespaces[7]](gitnamespaces) man page; it’s best to keep private data in a separate repository. uploadpack.allowReachableSHA1InWant Allow `upload-pack` to accept a fetch request that asks for an object that is reachable from any ref tip. However, note that calculating object reachability is computationally expensive. Defaults to `false`. Even if this is false, a client may be able to steal objects via the techniques described in the "SECURITY" section of the [gitnamespaces[7]](gitnamespaces) man page; it’s best to keep private data in a separate repository. uploadpack.allowAnySHA1InWant Allow `upload-pack` to accept a fetch request that asks for any object at all. Defaults to `false`. uploadpack.keepAlive When `upload-pack` has started `pack-objects`, there may be a quiet period while `pack-objects` prepares the pack. Normally it would output progress information, but if `--quiet` was used for the fetch, `pack-objects` will output nothing at all until the pack data begins. Some clients and networks may consider the server to be hung and give up. Setting this option instructs `upload-pack` to send an empty keepalive packet every `uploadpack.keepAlive` seconds. Setting this option to 0 disables keepalive packets entirely. The default is 5 seconds. uploadpack.packObjectsHook If this option is set, when `upload-pack` would run `git pack-objects` to create a packfile for a client, it will run this shell command instead. The `pack-objects` command and arguments it `would` have run (including the `git pack-objects` at the beginning) are appended to the shell command. 
The stdin and stdout of the hook are treated as if `pack-objects` itself was run. I.e., `upload-pack` will feed input intended for `pack-objects` to the hook, and expects a completed packfile on stdout. Note that this configuration variable is only respected when it is specified in protected configuration (see [SCOPES](#SCOPES)). This is a safety measure against fetching from untrusted repositories. uploadpack.allowFilter If this option is set, `upload-pack` will support partial clone and partial fetch object filtering. uploadpackfilter.allow Provides a default value for unspecified object filters (see: the below configuration variable). If set to `true`, this will also enable all filters which get added in the future. Defaults to `true`. uploadpackfilter.<filter>.allow Explicitly allow or ban the object filter corresponding to `<filter>`, where `<filter>` may be one of: `blob:none`, `blob:limit`, `object:type`, `tree`, `sparse:oid`, or `combine`. If using combined filters, both `combine` and all of the nested filter kinds must be allowed. Defaults to `uploadpackfilter.allow`. uploadpackfilter.tree.maxDepth Only allow `--filter=tree:<n>` when `<n>` is no more than the value of `uploadpackfilter.tree.maxDepth`. If set, this also implies `uploadpackfilter.tree.allow=true`, unless this configuration variable had already been set. Has no effect if unset. uploadpack.allowRefInWant If this option is set, `upload-pack` will support the `ref-in-want` feature of the protocol version 2 `fetch` command. This feature is intended for the benefit of load-balanced servers which may not have the same view of what OIDs their refs point to due to replication delay. url.<base>.insteadOf Any URL that starts with this value will be rewritten to start, instead, with <base>. 
In cases where some site serves a large number of repositories, and serves them with multiple access methods, and some users need to use different access methods, this feature allows people to specify any of the equivalent URLs and have Git automatically rewrite the URL to the best alternative for the particular user, even for a never-before-seen repository on the site. When more than one insteadOf strings match a given URL, the longest match is used. Note that any protocol restrictions will be applied to the rewritten URL. If the rewrite changes the URL to use a custom protocol or remote helper, you may need to adjust the `protocol.*.allow` config to permit the request. In particular, protocols you expect to use for submodules must be set to `always` rather than the default of `user`. See the description of `protocol.allow` above. url.<base>.pushInsteadOf Any URL that starts with this value will not be pushed to; instead, it will be rewritten to start with <base>, and the resulting URL will be pushed to. In cases where some site serves a large number of repositories, and serves them with multiple access methods, some of which do not allow push, this feature allows people to specify a pull-only URL and have Git automatically use an appropriate URL to push, even for a never-before-seen repository on the site. When more than one pushInsteadOf strings match a given URL, the longest match is used. If a remote has an explicit pushurl, Git will ignore this setting for that remote. user.name user.email author.name author.email committer.name committer.email The `user.name` and `user.email` variables determine what ends up in the `author` and `committer` field of commit objects. If you need the `author` or `committer` to be different, the `author.name`, `author.email`, `committer.name` or `committer.email` variables can be set. 
Also, all of these can be overridden by the `GIT_AUTHOR_NAME`, `GIT_AUTHOR_EMAIL`, `GIT_COMMITTER_NAME`, `GIT_COMMITTER_EMAIL` and `EMAIL` environment variables. Note that the `name` forms of these variables conventionally refer to some form of a personal name. See [git-commit[1]](git-commit) and the environment variables section of [git[1]](git) for more information on these settings and the `credential.username` option if you’re looking for authentication credentials instead. user.useConfigOnly Instruct Git to avoid trying to guess defaults for `user.email` and `user.name`, and instead retrieve the values only from the configuration. For example, if you have multiple email addresses and would like to use a different one for each repository, then with this configuration option set to `true` in the global config along with a name, Git will prompt you to set up an email before making new commits in a newly cloned repository. Defaults to `false`. user.signingKey If [git-tag[1]](git-tag) or [git-commit[1]](git-commit) is not selecting the key you want it to automatically when creating a signed tag or commit, you can override the default selection with this variable. This option is passed unchanged to gpg’s --local-user parameter, so you may specify a key using any method that gpg supports. If gpg.format is set to `ssh` this can contain the path to either your private ssh key or the public key when ssh-agent is used. Alternatively it can contain a public key prefixed with `key::` directly (e.g.: "key::ssh-rsa XXXXXX identifier"). The private key needs to be available via ssh-agent. If not set git will call gpg.ssh.defaultKeyCommand (e.g.: "ssh-add -L") and try to use the first key available. For backward compatibility, a raw key which begins with "ssh-", such as "ssh-rsa XXXXXX identifier", is treated as "key::ssh-rsa XXXXXX identifier", but this form is deprecated; use the `key::` form instead. 
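As a sketch of SSH-based signing using the `key::` form described above (with the same `XXXXXX` placeholder the text uses; substitute a real public key):

```
[gpg]
	format = ssh
[user]
	; placeholder public key, given inline via the key:: prefix
	signingKey = key::ssh-ed25519 XXXXXX identifier
```

The corresponding private key must be available via ssh-agent for signing to succeed.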
versionsort.prereleaseSuffix (deprecated) Deprecated alias for `versionsort.suffix`. Ignored if `versionsort.suffix` is set. versionsort.suffix Even when version sort is used in [git-tag[1]](git-tag), tagnames with the same base version but different suffixes are still sorted lexicographically, resulting e.g. in prerelease tags appearing after the main release (e.g. "1.0-rc1" after "1.0"). This variable can be specified to determine the sorting order of tags with different suffixes. By specifying a single suffix in this variable, any tagname containing that suffix will appear before the corresponding main release. E.g. if the variable is set to "-rc", then all "1.0-rcX" tags will appear before "1.0". If specified multiple times, once per suffix, then the order of suffixes in the configuration will determine the sorting order of tagnames with those suffixes. E.g. if "-pre" appears before "-rc" in the configuration, then all "1.0-preX" tags will be listed before any "1.0-rcX" tags. The placement of the main release tag relative to tags with various suffixes can be determined by specifying the empty suffix among those other suffixes. E.g. if the suffixes "-rc", "", "-ck" and "-bfs" appear in the configuration in this order, then all "v4.8-rcX" tags are listed first, followed by "v4.8", then "v4.8-ckX" and finally "v4.8-bfsX". If more than one suffixes match the same tagname, then that tagname will be sorted according to the suffix which starts at the earliest position in the tagname. If more than one different matching suffixes start at that earliest position, then that tagname will be sorted according to the longest of those suffixes. The sorting order between different suffixes is undefined if they are in multiple config files. web.browser Specify a web browser that may be used by some commands. Currently only [git-instaweb[1]](git-instaweb) and [git-help[1]](git-help) may use it. 
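The suffix ordering described above (all "v4.8-rcX" tags first, then "v4.8", then "v4.8-ckX", and finally "v4.8-bfsX") could be written in a configuration file as:

```
[versionsort]
	suffix = -rc
	suffix = ""
	suffix = -ck
	suffix = -bfs
```

Each `suffix` line adds one value to the multi-valued list; the quoted empty string stands for the main release tag itself.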
worktree.guessRemote If no branch is specified and neither `-b` nor `-B` nor `--detach` is used, then `git worktree add` defaults to creating a new branch from HEAD. If `worktree.guessRemote` is set to true, `worktree add` tries to find a remote-tracking branch whose name uniquely matches the new branch name. If such a branch exists, it is checked out and set as "upstream" for the new branch. If no such match can be found, it falls back to creating a new branch from the current HEAD. Bugs ---- When using the deprecated `[section.subsection]` syntax, changing a value will result in adding a multi-line key instead of a change, if the subsection is given with at least one uppercase character. For example when the config looks like ``` [section.subsection] key = value1 ``` and running `git config section.Subsection.key value2` will result in ``` [section.subsection] key = value1 key = value2 ```
git Reference Reference ========= Quick reference guides: [GitHub Cheat Sheet](https://github.github.com/training-kit/) | [Visual Git Cheat Sheet](https://ndpsoftware.com/git-cheatsheet.html) [Complete list of all commands](git#_git_commands) ### Setup and Config * <git> * [config](git-config) * [help](git-help) * [bugreport](git-bugreport) ### Getting and Creating Projects * [init](git-init) * [clone](git-clone) ### Basic Snapshotting * [add](git-add) * [status](git-status) * [diff](git-diff) * [commit](git-commit) * [notes](git-notes) * [restore](git-restore) * [reset](git-reset) * [rm](git-rm) * [mv](git-mv) ### Branching and Merging * [branch](git-branch) * [checkout](git-checkout) * [switch](git-switch) * [merge](git-merge) * [mergetool](git-mergetool) * [log](git-log) * [stash](git-stash) * [tag](git-tag) * [worktree](git-worktree) ### Sharing and Updating Projects * [fetch](git-fetch) * [pull](git-pull) * [push](git-push) * [remote](git-remote) * [submodule](git-submodule) ### Inspection and Comparison * [show](git-show) * [log](git-log) * [diff](git-diff) * [difftool](git-difftool) * [range-diff](git-range-diff) * [shortlog](git-shortlog) * [describe](git-describe) ### Patching * [apply](git-apply) * [cherry-pick](git-cherry-pick) * [diff](git-diff) * [rebase](git-rebase) * [revert](git-revert) ### Debugging * [bisect](git-bisect) * [blame](git-blame) * [grep](git-grep) ### Guides * <gitattributes> * [Command-line interface conventions](gitcli) * [Everyday Git](giteveryday) * [Frequently Asked Questions (FAQ)](gitfaq) * [Glossary](gitglossary) * [Hooks](githooks) * <gitignore> * <gitmodules> * [Revisions](gitrevisions) * [Submodules](gitsubmodules) * [Tutorial](gittutorial) * [Workflows](gitworkflows) * [All guides...](git#_guides) ### Email * [am](git-am) * [apply](git-apply) * [format-patch](git-format-patch) * [send-email](git-send-email) * [request-pull](git-request-pull) ### External Systems * [svn](git-svn) * [fast-import](git-fast-import) ### 
Administration * [clean](git-clean) * [gc](git-gc) * [fsck](git-fsck) * [reflog](git-reflog) * [filter-branch](git-filter-branch) * [instaweb](git-instaweb) * [archive](git-archive) * [bundle](git-bundle) ### Server Admin * [daemon](git-daemon) * [update-server-info](git-update-server-info) ### Plumbing Commands * [cat-file](git-cat-file) * [check-ignore](git-check-ignore) * [checkout-index](git-checkout-index) * [commit-tree](git-commit-tree) * [count-objects](git-count-objects) * [diff-index](git-diff-index) * [for-each-ref](git-for-each-ref) * [hash-object](git-hash-object) * [ls-files](git-ls-files) * [ls-tree](git-ls-tree) * [merge-base](git-merge-base) * [read-tree](git-read-tree) * [rev-list](git-rev-list) * [rev-parse](git-rev-parse) * [show-ref](git-show-ref) * [symbolic-ref](git-symbolic-ref) * [update-index](git-update-index) * [update-ref](git-update-ref) * [verify-pack](git-verify-pack) * [write-tree](git-write-tree) git git-mktag git-mktag ========= Name ---- git-mktag - Creates a tag object with extra validation Synopsis -------- ``` git mktag ``` Description ----------- Reads a tag’s contents on standard input and creates a tag object. The output is the new tag’s <object> identifier. This command is mostly equivalent to [git-hash-object[1]](git-hash-object) invoked with `-t tag -w --stdin`. I.e. both of these will create and write a tag found in `my-tag`: ``` git mktag <my-tag git hash-object -t tag -w --stdin <my-tag ``` The difference is that mktag will die before writing the tag if the tag doesn’t pass a [git-fsck[1]](git-fsck) check. The "fsck" check done by mktag is stricter than what [git-fsck[1]](git-fsck) would run by default in that all `fsck.<msg-id>` messages are promoted from warnings to errors (so e.g. a missing "tagger" line is an error). Extra headers in the object are also an error under mktag, but ignored by [git-fsck[1]](git-fsck).
This extra check can be turned off by setting the appropriate `fsck.<msg-id>` variable: ``` git -c fsck.extraHeaderEntry=ignore mktag <my-tag-with-headers ``` Options ------- --strict By default mktag turns on the equivalent of [git-fsck[1]](git-fsck) `--strict` mode. Use `--no-strict` to disable it. Tag format ---------- A tag signature file, to be fed to this command’s standard input, has a very simple fixed format: four lines of ``` object <hash> type <typename> tag <tagname> tagger <tagger> ``` followed by some `optional` free-form message (some tags created by older Git may not have a `tagger` line). The message, when it exists, is separated by a blank line from the header. The message part may contain a signature that Git itself doesn’t care about, but that can be verified with gpg. git gitglossary gitglossary =========== Name ---- gitglossary - A Git Glossary Synopsis -------- \* Description ----------- alternate object database Via the alternates mechanism, a [repository](#def_repository) can inherit part of its [object database](#def_object_database) from another object database, which is called an "alternate". bare repository A bare repository is normally an appropriately named [directory](#def_directory) with a `.git` suffix that does not have a locally checked-out copy of any of the files under revision control. That is, all of the Git administrative and control files that would normally be present in the hidden `.git` sub-directory are directly present in the `repository.git` directory instead, and no other files are present and checked out. Usually publishers of public repositories make bare repositories available. blob object Untyped [object](#def_object), e.g. the contents of a file. branch A "branch" is a line of development. The most recent [commit](#def_commit) on a branch is referred to as the tip of that branch.
The tip of the branch is [referenced](#def_ref) by a branch [head](#def_head), which moves forward as additional development is done on the branch. A single Git [repository](#def_repository) can track an arbitrary number of branches, but your [working tree](#def_working_tree) is associated with just one of them (the "current" or "checked out" branch), and [HEAD](#def_HEAD) points to that branch. cache Obsolete for: [index](#def_index). chain A list of objects, where each [object](#def_object) in the list contains a reference to its successor (for example, the successor of a [commit](#def_commit) could be one of its [parents](#def_parent)). changeset BitKeeper/cvsps speak for "[commit](#def_commit)". Since Git does not store changes, but states, it really does not make sense to use the term "changesets" with Git. checkout The action of updating all or part of the [working tree](#def_working_tree) with a [tree object](#def_tree_object) or [blob](#def_blob_object) from the [object database](#def_object_database), and updating the [index](#def_index) and [HEAD](#def_HEAD) if the whole working tree has been pointed at a new [branch](#def_branch). cherry-picking In [SCM](#def_SCM) jargon, "cherry pick" means to choose a subset of changes out of a series of changes (typically commits) and record them as a new series of changes on top of a different codebase. In Git, this is performed by the "git cherry-pick" command to extract the change introduced by an existing [commit](#def_commit) and to record it based on the tip of the current [branch](#def_branch) as a new commit. clean A [working tree](#def_working_tree) is clean, if it corresponds to the [revision](#def_revision) referenced by the current [head](#def_head). Also see "[dirty](#def_dirty)". commit As a noun: A single point in the Git history; the entire history of a project is represented as a set of interrelated commits. 
The word "commit" is often used by Git in the same places other revision control systems use the words "revision" or "version". Also used as a short hand for [commit object](#def_commit_object). As a verb: The action of storing a new snapshot of the project’s state in the Git history, by creating a new commit representing the current state of the [index](#def_index) and advancing [HEAD](#def_HEAD) to point at the new commit. commit graph concept, representations and usage A synonym for the [DAG](#def_DAG) structure formed by the commits in the object database, [referenced](#def_ref) by branch tips, using their [chain](#def_chain) of linked commits. This structure is the definitive commit graph. The graph can be represented in other ways, e.g. the ["commit-graph" file](#def_commit_graph_file). commit-graph file The "commit-graph" (normally hyphenated) file is a supplemental representation of the [commit graph](#def_commit_graph_general) which accelerates commit graph walks. The "commit-graph" file is stored either in the .git/objects/info directory or in the info directory of an alternate object database. commit object An [object](#def_object) which contains the information about a particular [revision](#def_revision), such as [parents](#def_parent), committer, author, date and the [tree object](#def_tree_object) which corresponds to the top [directory](#def_directory) of the stored revision. commit-ish (also committish) A [commit object](#def_commit_object) or an [object](#def_object) that can be recursively dereferenced to a commit object. The following are all commit-ishes: a commit object, a [tag object](#def_tag_object) that points to a commit object, a tag object that points to a tag object that points to a commit object, etc. core Git Fundamental data structures and utilities of Git. Exposes only limited source code management tools. DAG Directed acyclic graph. 
The [commit objects](#def_commit_object) form a directed acyclic graph, because they have parents (directed), and the graph of commit objects is acyclic (there is no [chain](#def_chain) which begins and ends with the same [object](#def_object)). dangling object An [unreachable object](#def_unreachable_object) which is not [reachable](#def_reachable) even from other unreachable objects; a dangling object has no references to it from any reference or [object](#def_object) in the [repository](#def_repository). detached HEAD Normally the [HEAD](#def_HEAD) stores the name of a [branch](#def_branch), and commands that operate on the history HEAD represents operate on the history leading to the tip of the branch the HEAD points at. However, Git also allows you to [check out](#def_checkout) an arbitrary [commit](#def_commit) that isn’t necessarily the tip of any particular branch. The HEAD in such a state is called "detached". Note that commands that operate on the history of the current branch (e.g. `git commit` to build a new history on top of it) still work while the HEAD is detached. They update the HEAD to point at the tip of the updated history without affecting any branch. Commands that update or inquire information `about` the current branch (e.g. `git branch --set-upstream-to` that sets what remote-tracking branch the current branch integrates with) obviously do not work, as there is no (real) current branch to ask about in this state. directory The list you get with "ls" :-) dirty A [working tree](#def_working_tree) is said to be "dirty" if it contains modifications which have not been [committed](#def_commit) to the current [branch](#def_branch). evil merge An evil merge is a [merge](#def_merge) that introduces changes that do not appear in any [parent](#def_parent). 
fast-forward A fast-forward is a special type of [merge](#def_merge) where you have a [revision](#def_revision) and you are "merging" another [branch](#def_branch)'s changes that happen to be a descendant of what you have. In such a case, you do not make a new [merge](#def_merge) [commit](#def_commit) but instead just update your branch to point at the same revision as the branch you are merging. This will happen frequently on a [remote-tracking branch](#def_remote_tracking_branch) of a remote [repository](#def_repository). fetch Fetching a [branch](#def_branch) means to get the branch’s [head ref](#def_head_ref) from a remote [repository](#def_repository), to find out which objects are missing from the local [object database](#def_object_database), and to get them, too. See also [git-fetch[1]](git-fetch). file system Linus Torvalds originally designed Git to be a user space file system, i.e. the infrastructure to hold files and directories. That ensured the efficiency and speed of Git. Git archive Synonym for [repository](#def_repository) (for arch people). gitfile A plain file `.git` at the root of a working tree that points at the directory that is the real repository. grafts Grafts enables two otherwise different lines of development to be joined together by recording fake ancestry information for commits. This way you can make Git pretend the set of [parents](#def_parent) a [commit](#def_commit) has is different from what was recorded when the commit was created. Configured via the `.git/info/grafts` file. Note that the grafts mechanism is outdated and can lead to problems transferring objects between repositories; see [git-replace[1]](git-replace) for a more flexible and robust system to do the same thing. hash In Git’s context, synonym for [object name](#def_object_name). head A [named reference](#def_ref) to the [commit](#def_commit) at the tip of a [branch](#def_branch). 
Heads are stored in a file in `$GIT_DIR/refs/heads/` directory, except when using packed refs. (See [git-pack-refs[1]](git-pack-refs).) HEAD The current [branch](#def_branch). In more detail: Your [working tree](#def_working_tree) is normally derived from the state of the tree referred to by HEAD. HEAD is a reference to one of the [heads](#def_head) in your repository, except when using a [detached HEAD](#def_detached_HEAD), in which case it directly references an arbitrary commit. head ref A synonym for [head](#def_head). hook During the normal execution of several Git commands, call-outs are made to optional scripts that allow a developer to add functionality or checking. Typically, the hooks allow for a command to be pre-verified and potentially aborted, and allow for a post-notification after the operation is done. The hook scripts are found in the `$GIT_DIR/hooks/` directory, and are enabled by simply removing the `.sample` suffix from the filename. In earlier versions of Git you had to make them executable. index A collection of files with stat information, whose contents are stored as objects. The index is a stored version of your [working tree](#def_working_tree). Truth be told, it can also contain a second, and even a third version of a working tree, which are used when [merging](#def_merge). index entry The information regarding a particular file, stored in the [index](#def_index). An index entry can be unmerged, if a [merge](#def_merge) was started, but not yet finished (i.e. if the index contains multiple versions of that file). master The default development [branch](#def_branch). Whenever you create a Git [repository](#def_repository), a branch named "master" is created, and becomes the active branch. In most cases, this contains the local development, though that is purely by convention and is not required. 
merge As a verb: To bring the contents of another [branch](#def_branch) (possibly from an external [repository](#def_repository)) into the current branch. In the case where the merged-in branch is from a different repository, this is done by first [fetching](#def_fetch) the remote branch and then merging the result into the current branch. This combination of fetch and merge operations is called a [pull](#def_pull). Merging is performed by an automatic process that identifies changes made since the branches diverged, and then applies all those changes together. In cases where changes conflict, manual intervention may be required to complete the merge. As a noun: unless it is a [fast-forward](#def_fast_forward), a successful merge results in the creation of a new [commit](#def_commit) representing the result of the merge, and having as [parents](#def_parent) the tips of the merged [branches](#def_branch). This commit is referred to as a "merge commit", or sometimes just a "merge". object The unit of storage in Git. It is uniquely identified by the [SHA-1](#def_SHA1) of its contents. Consequently, an object cannot be changed. object database Stores a set of "objects", and an individual [object](#def_object) is identified by its [object name](#def_object_name). The objects usually live in `$GIT_DIR/objects/`. object identifier (oid) Synonym for [object name](#def_object_name). object name The unique identifier of an [object](#def_object). The object name is usually represented by a 40 character hexadecimal string. Also colloquially called [SHA-1](#def_SHA1). object type One of the identifiers "[commit](#def_commit_object)", "[tree](#def_tree_object)", "[tag](#def_tag_object)" or "[blob](#def_blob_object)" describing the type of an [object](#def_object). octopus To [merge](#def_merge) more than two [branches](#def_branch). origin The default upstream [repository](#def_repository). Most projects have at least one upstream project which they track. 
By default `origin` is used for that purpose. New upstream updates will be fetched into [remote-tracking branches](#def_remote_tracking_branch) named origin/name-of-upstream-branch, which you can see using `git branch -r`. overlay Only update and add files to the working directory, but don’t delete them, similar to how `cp -R` would update the contents in the destination directory. This is the default mode in a [checkout](#def_checkout) when checking out files from the [index](#def_index) or a [tree-ish](#def_tree-ish). In contrast, no-overlay mode also deletes tracked files not present in the source, similar to `rsync --delete`. pack A set of objects which have been compressed into one file (to save space or to transmit them efficiently). pack index The list of identifiers, and other information, of the objects in a [pack](#def_pack), to assist in efficiently accessing the contents of a pack. pathspec Pattern used to limit paths in Git commands. Pathspecs are used on the command line of "git ls-files", "git ls-tree", "git add", "git grep", "git diff", "git checkout", and many other commands to limit the scope of operations to some subset of the tree or working tree. See the documentation of each command for whether paths are relative to the current directory or toplevel. The pathspec syntax is as follows: * any path matches itself * the pathspec up to the last slash represents a directory prefix. The scope of that pathspec is limited to that subtree. * the rest of the pathspec is a pattern for the remainder of the pathname. Paths relative to the directory prefix will be matched against that pattern using fnmatch(3); in particular, `*` and `?` `can` match directory separators. For example, Documentation/\*.jpg will match all .jpg files in the Documentation subtree, including Documentation/chapter\_1/figure\_1.jpg. A pathspec that begins with a colon `:` has special meaning. 
In the short form, the leading colon `:` is followed by zero or more "magic signature" letters (which is optionally terminated by another colon `:`), and the remainder is the pattern to match against the path. The "magic signature" consists of ASCII symbols that are neither alphanumeric, glob, regex special characters nor colon. The optional colon that terminates the "magic signature" can be omitted if the pattern begins with a character that does not belong to the "magic signature" symbol set and is not a colon. In the long form, the leading colon `:` is followed by an open parenthesis `(`, a comma-separated list of zero or more "magic words", and a close parenthesis `)`, and the remainder is the pattern to match against the path. A pathspec with only a colon means "there is no pathspec". This form should not be combined with other pathspecs. top The magic word `top` (magic signature: `/`) makes the pattern match from the root of the working tree, even when you are running the command from inside a subdirectory. literal Wildcards in the pattern such as `*` or `?` are treated as literal characters. icase Case-insensitive match. glob Git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM\_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, "Documentation/\*.html" matches "Documentation/git.html" but not "Documentation/ppc/ppc.html" or "tools/perf/Documentation/perf.html". Two consecutive asterisks ("`**`") in patterns matched against the full pathname may have special meaning: * A leading "`**`" followed by a slash means match in all directories. For example, "`**/foo`" matches file or directory "`foo`" anywhere, the same as pattern "`foo`". "`**/foo/bar`" matches file or directory "`bar`" anywhere that is directly under directory "`foo`". * A trailing "`/**`" matches everything inside. 
For example, "`abc/**`" matches all files inside directory "abc", relative to the location of the `.gitignore` file, with infinite depth. * A slash followed by two consecutive asterisks then a slash matches zero or more directories. For example, "`a/**/b`" matches "`a/b`", "`a/x/b`", "`a/x/y/b`" and so on. * Other consecutive asterisks are considered invalid. Glob magic is incompatible with literal magic. attr After `attr:` comes a space separated list of "attribute requirements", all of which must be met in order for the path to be considered a match; this is in addition to the usual non-magic pathspec pattern matching. See [gitattributes[5]](gitattributes). Each of the attribute requirements for the path takes one of these forms: * "`ATTR`" requires that the attribute `ATTR` be set. * "`-ATTR`" requires that the attribute `ATTR` be unset. * "`ATTR=VALUE`" requires that the attribute `ATTR` be set to the string `VALUE`. * "`!ATTR`" requires that the attribute `ATTR` be unspecified. Note that when matching against a tree object, attributes are still obtained from working tree, not from the given tree object. exclude After a path matches any non-exclude pathspec, it will be run through all exclude pathspecs (magic signature: `!` or its synonym `^`). If it matches, the path is ignored. When there is no non-exclude pathspec, the exclusion is applied to the result set as if invoked without any pathspec. parent A [commit object](#def_commit_object) contains a (possibly empty) list of the logical predecessor(s) in the line of development, i.e. its parents. pickaxe The term [pickaxe](#def_pickaxe) refers to an option to the diffcore routines that help select changes that add or delete a given text string. With the `--pickaxe-all` option, it can be used to view the full [changeset](#def_changeset) that introduced or removed, say, a particular line of text. See [git-diff[1]](git-diff). plumbing Cute name for [core Git](#def_core_git). 
porcelain Cute name for programs and program suites depending on [core Git](#def_core_git), presenting high-level access to core Git. Porcelains expose more of an [SCM](#def_SCM) interface than the [plumbing](#def_plumbing). per-worktree ref Refs that are per-[worktree](#def_worktree), rather than global. This is presently only [HEAD](#def_HEAD) and any refs that start with `refs/bisect/`, but might later include other unusual refs. pseudoref Pseudorefs are a class of files under `$GIT_DIR` which behave like refs for the purposes of rev-parse, but which are treated specially by git. Pseudorefs both have names that are all-caps, and always start with a line consisting of a [SHA-1](#def_SHA1) followed by whitespace. So, HEAD is not a pseudoref, because it is sometimes a symbolic ref. They might optionally contain some additional data. `MERGE_HEAD` and `CHERRY_PICK_HEAD` are examples. Unlike [per-worktree refs](#def_per_worktree_ref), these files cannot be symbolic refs, and never have reflogs. They also cannot be updated through the normal ref update machinery. Instead, they are updated by directly writing to the files. However, they can be read as if they were refs, so `git rev-parse MERGE_HEAD` will work. pull Pulling a [branch](#def_branch) means to [fetch](#def_fetch) it and [merge](#def_merge) it. See also [git-pull[1]](git-pull). push Pushing a [branch](#def_branch) means to get the branch’s [head ref](#def_head_ref) from a remote [repository](#def_repository), find out if it is an ancestor of the branch’s local head ref, and, if so, put all objects which are [reachable](#def_reachable) from the local head ref and missing from the remote repository into the remote [object database](#def_object_database), and update the remote head ref. If the remote [head](#def_head) is not an ancestor of the local head, the push fails. reachable All of the ancestors of a given [commit](#def_commit) are said to be "reachable" from that commit. 
More generally, one [object](#def_object) is reachable from another if we can reach the one from the other by a [chain](#def_chain) that follows [tags](#def_tag) to whatever they tag, [commits](#def_commit_object) to their parents or trees, and [trees](#def_tree_object) to the trees or [blobs](#def_blob_object) that they contain. reachability bitmaps Reachability bitmaps store information about the [reachability](#def_reachable) of a selected set of commits in a packfile, or a multi-pack index (MIDX), to speed up object search. The bitmaps are stored in a ".bitmap" file. A repository may have at most one bitmap file in use. The bitmap file may belong to either one pack, or the repository’s multi-pack index (if it exists). rebase To reapply a series of changes from a [branch](#def_branch) to a different base, and reset the [head](#def_head) of that branch to the result. ref A name that begins with `refs/` (e.g. `refs/heads/master`) that points to an [object name](#def_object_name) or another ref (the latter is called a [symbolic ref](#def_symref)). For convenience, a ref can sometimes be abbreviated when used as an argument to a Git command; see [gitrevisions[7]](gitrevisions) for details. Refs are stored in the [repository](#def_repository). The ref namespace is hierarchical. Different subhierarchies are used for different purposes (e.g. the `refs/heads/` hierarchy is used to represent local branches). There are a few special-purpose refs that do not begin with `refs/`. The most notable example is `HEAD`. reflog A reflog shows the local "history" of a ref. In other words, it can tell you what the 3rd last revision in `this` repository was, and what was the current state in `this` repository, yesterday 9:14pm. See [git-reflog[1]](git-reflog) for details. refspec A "refspec" is used by [fetch](#def_fetch) and [push](#def_push) to describe the mapping between remote [ref](#def_ref) and local ref. 
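As a concrete illustration of a refspec (a sketch, assuming only that `git` is installed; the remote URL is a placeholder), `git remote add` configures a default fetch refspec that maps every branch on the remote into the local remote-tracking namespace:

```shell
# Inspect the default fetch refspec written by `git remote add`:
# the part left of the colon names refs on the remote, the part on the
# right names the local remote-tracking refs; the leading + permits
# non-fast-forward (forced) updates of the local side.
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git remote add origin /placeholder/remote.git   # URL is illustrative
spec=$(git config remote.origin.fetch)
echo "$spec"   # +refs/heads/*:refs/remotes/origin/*
```

The same `<src>:<dst>` form can be given directly on the command line, e.g. `git push origin HEAD:refs/heads/topic`.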
remote repository A [repository](#def_repository) which is used to track the same project but resides somewhere else. To communicate with remotes, see [fetch](#def_fetch) or [push](#def_push). remote-tracking branch A [ref](#def_ref) that is used to follow changes from another [repository](#def_repository). It typically looks like `refs/remotes/foo/bar` (indicating that it tracks a branch named `bar` in a remote named `foo`), and matches the right-hand-side of a configured fetch [refspec](#def_refspec). A remote-tracking branch should not contain direct modifications or have local commits made to it. repository A collection of [refs](#def_ref) together with an [object database](#def_object_database) containing all objects which are [reachable](#def_reachable) from the refs, possibly accompanied by metadata from one or more [porcelains](#def_porcelain). A repository can share an object database with other repositories via the [alternates mechanism](#def_alternate_object_database). resolve The action of fixing up manually what a failed automatic [merge](#def_merge) left behind. revision Synonym for [commit](#def_commit) (the noun). rewind To throw away part of the development, i.e. to assign the [head](#def_head) to an earlier [revision](#def_revision). SCM Source code management (tool). SHA-1 "Secure Hash Algorithm 1"; a cryptographic hash function. In the context of Git, used as a synonym for [object name](#def_object_name). shallow clone Mostly a synonym for [shallow repository](#def_shallow_repository), but the phrase makes it more explicit that it was created by running the `git clone --depth=...` command. shallow repository A shallow [repository](#def_repository) has an incomplete history some of whose [commits](#def_commit) have [parents](#def_parent) cauterized away (in other words, Git is told to pretend that these commits do not have the parents, even though they are recorded in the [commit object](#def_commit_object)). 
This is sometimes useful when you are interested only in the recent history of a project even though the real history recorded in the upstream is much larger. A shallow repository is created by giving the `--depth` option to [git-clone[1]](git-clone), and its history can be later deepened with [git-fetch[1]](git-fetch). stash entry An [object](#def_object) used to temporarily store the contents of a [dirty](#def_dirty) working directory and the index for future reuse. submodule A [repository](#def_repository) that holds the history of a separate project inside another repository (the latter of which is called [superproject](#def_superproject)). superproject A [repository](#def_repository) that references repositories of other projects in its working tree as [submodules](#def_submodule). The superproject knows about the names of (but does not hold copies of) commit objects of the contained submodules. symref Symbolic reference: instead of containing the [SHA-1](#def_SHA1) id itself, it is of the format `ref: refs/some/thing` and when referenced, it recursively dereferences to this reference. `[HEAD](#def_HEAD)` is a prime example of a symref. Symbolic references are manipulated with the [git-symbolic-ref[1]](git-symbolic-ref) command. tag A [ref](#def_ref) under `refs/tags/` namespace that points to an object of an arbitrary type (typically a tag points to either a [tag](#def_tag_object) or a [commit object](#def_commit_object)). In contrast to a [head](#def_head), a tag is not updated by the `commit` command. A Git tag has nothing to do with a Lisp tag (which would be called an [object type](#def_object_type) in Git’s context). A tag is most typically used to mark a particular point in the commit ancestry [chain](#def_chain). tag object An [object](#def_object) containing a [ref](#def_ref) pointing to another object, which can contain a message just like a [commit object](#def_commit_object). 
It can also contain a (PGP) signature, in which case it is called a "signed tag object". topic branch A regular Git [branch](#def_branch) that is used by a developer to identify a conceptual line of development. Since branches are very easy and inexpensive, it is often desirable to have several small branches that each contain very well defined concepts or small incremental yet related changes. tree Either a [working tree](#def_working_tree), or a [tree object](#def_tree_object) together with the dependent [blob](#def_blob_object) and tree objects (i.e. a stored representation of a working tree). tree object An [object](#def_object) containing a list of file names and modes along with refs to the associated blob and/or tree objects. A [tree](#def_tree) is equivalent to a [directory](#def_directory). tree-ish (also treeish) A [tree object](#def_tree_object) or an [object](#def_object) that can be recursively dereferenced to a tree object. Dereferencing a [commit object](#def_commit_object) yields the tree object corresponding to the [revision](#def_revision)'s top [directory](#def_directory). The following are all tree-ishes: a [commit-ish](#def_commit-ish), a tree object, a [tag object](#def_tag_object) that points to a tree object, a tag object that points to a tag object that points to a tree object, etc. unmerged index An [index](#def_index) which contains unmerged [index entries](#def_index_entry). unreachable object An [object](#def_object) which is not [reachable](#def_reachable) from a [branch](#def_branch), [tag](#def_tag), or any other reference. upstream branch The default [branch](#def_branch) that is merged into the branch in question (or the branch in question is rebased onto). It is configured via branch.<name>.remote and branch.<name>.merge. If the upstream branch of `A` is `origin/B` sometimes we say "`A` is tracking `origin/B`". working tree The tree of actual checked out files. 
The working tree normally contains the contents of the [HEAD](#def_HEAD) commit’s tree, plus any local changes that you have made but not yet committed. worktree A repository can have zero (i.e. bare repository) or one or more worktrees attached to it. One "worktree" consists of a "working tree" and repository metadata, most of which are shared among other worktrees of a single repository, and some of which are maintained separately per worktree (e.g. the index, HEAD and pseudorefs like MERGE\_HEAD, per-worktree refs and per-worktree configuration file). See also -------- [gittutorial[7]](gittutorial), [gittutorial-2[7]](gittutorial-2), [gitcvs-migration[7]](gitcvs-migration), [giteveryday[7]](giteveryday), [The Git User’s Manual](user-manual)
git git-prune-packed git-prune-packed ================ Name ---- git-prune-packed - Remove extra objects that are already in pack files Synopsis -------- ``` git prune-packed [-n | --dry-run] [-q | --quiet] ``` Description ----------- This program searches the `$GIT_OBJECT_DIRECTORY` for all objects that currently exist in a pack file as well as the independent object directories. All such extra objects are removed. A pack is a collection of objects, individually compressed, with delta compression applied, stored in a single file, with an associated index file. Packs are used to reduce the load on mirror systems, backup engines, disk storage, etc. Options ------- -n --dry-run Don’t actually remove any objects, only show those that would have been removed. -q --quiet Squelch the progress indicator. See also -------- [git-pack-objects[1]](git-pack-objects) [git-repack[1]](git-repack) git git-worktree git-worktree ============ Name ---- git-worktree - Manage multiple working trees Synopsis -------- ``` git worktree add [-f] [--detach] [--checkout] [--lock [--reason <string>]] [-b <new-branch>] <path> [<commit-ish>] git worktree list [-v | --porcelain [-z]] git worktree lock [--reason <string>] <worktree> git worktree move <worktree> <new-path> git worktree prune [-n] [-v] [--expire <expire>] git worktree remove [-f] <worktree> git worktree repair [<path>…​] git worktree unlock <worktree> ``` Description ----------- Manage multiple working trees attached to the same repository. A git repository can support multiple working trees, allowing you to check out more than one branch at a time. With `git worktree add` a new working tree is associated with the repository, along with additional metadata that differentiates that working tree from others in the same repository. The working tree, along with this metadata, is called a "worktree". 
This new worktree is called a "linked worktree" as opposed to the "main worktree" prepared by [git-init[1]](git-init) or [git-clone[1]](git-clone). A repository has one main worktree (if it’s not a bare repository) and zero or more linked worktrees. When you are done with a linked worktree, remove it with `git worktree remove`. In its simplest form, `git worktree add <path>` automatically creates a new branch whose name is the final component of `<path>`, which is convenient if you plan to work on a new topic. For instance, `git worktree add ../hotfix` creates new branch `hotfix` and checks it out at path `../hotfix`. To instead work on an existing branch in a new worktree, use `git worktree add <path> <branch>`. On the other hand, if you just plan to make some experimental changes or do testing without disturbing existing development, it is often convenient to create a `throwaway` worktree not associated with any branch. For instance, `git worktree add -d <path>` creates a new worktree with a detached `HEAD` at the same commit as the current branch. If a working tree is deleted without using `git worktree remove`, then its associated administrative files, which reside in the repository (see "DETAILS" below), will eventually be removed automatically (see `gc.worktreePruneExpire` in [git-config[1]](git-config)), or you can run `git worktree prune` in the main or any linked worktree to clean up any stale administrative files. If the working tree for a linked worktree is stored on a portable device or network share which is not always mounted, you can prevent its administrative files from being pruned by issuing the `git worktree lock` command, optionally specifying `--reason` to explain why the worktree is locked. Commands -------- add <path> [<commit-ish>] Create a worktree at `<path>` and checkout `<commit-ish>` into it. The new worktree is linked to the current repository, sharing everything except per-worktree files such as `HEAD`, `index`, etc. 
As a convenience, `<commit-ish>` may be a bare "`-`", which is synonymous with `@{-1}`. If `<commit-ish>` is a branch name (call it `<branch>`) and is not found, and neither `-b` nor `-B` nor `--detach` are used, but there does exist a tracking branch in exactly one remote (call it `<remote>`) with a matching name, treat as equivalent to: ``` $ git worktree add --track -b <branch> <path> <remote>/<branch> ``` If the branch exists in multiple remotes and one of them is named by the `checkout.defaultRemote` configuration variable, we’ll use that one for the purposes of disambiguation, even if the `<branch>` isn’t unique across all remotes. Set it to e.g. `checkout.defaultRemote=origin` to always checkout remote branches from there if `<branch>` is ambiguous but exists on the `origin` remote. See also `checkout.defaultRemote` in [git-config[1]](git-config). If `<commit-ish>` is omitted and neither `-b` nor `-B` nor `--detach` used, then, as a convenience, the new worktree is associated with a branch (call it `<branch>`) named after `$(basename <path>)`. If `<branch>` doesn’t exist, a new branch based on `HEAD` is automatically created as if `-b <branch>` was given. If `<branch>` does exist, it will be checked out in the new worktree, if it’s not checked out anywhere else, otherwise the command will refuse to create the worktree (unless `--force` is used). list List details of each worktree. The main worktree is listed first, followed by each of the linked worktrees. The output details include whether the worktree is bare, the revision currently checked out, the branch currently checked out (or "detached HEAD" if none), "locked" if the worktree is locked, "prunable" if the worktree can be pruned by the `prune` command. lock If a worktree is on a portable device or network share which is not always mounted, lock it to prevent its administrative files from being pruned automatically. This also prevents it from being moved or deleted. 
Optionally, specify a reason for the lock with `--reason`. move Move a worktree to a new location. Note that the main worktree or linked worktrees containing submodules cannot be moved with this command. (The `git worktree repair` command, however, can reestablish the connection with linked worktrees if you move the main worktree manually.) prune Prune worktree information in `$GIT_DIR/worktrees`. remove Remove a worktree. Only clean worktrees (no untracked files and no modification in tracked files) can be removed. Unclean worktrees or ones with submodules can be removed with `--force`. The main worktree cannot be removed. repair [<path>…​] Repair worktree administrative files, if possible, if they have become corrupted or outdated due to external factors. For instance, if the main worktree (or bare repository) is moved, linked worktrees will be unable to locate it. Running `repair` in the main worktree will reestablish the connection from linked worktrees back to the main worktree. Similarly, if the working tree for a linked worktree is moved without using `git worktree move`, the main worktree (or bare repository) will be unable to locate it. Running `repair` within the recently-moved worktree will reestablish the connection. If multiple linked worktrees are moved, running `repair` from any worktree with each tree’s new `<path>` as an argument, will reestablish the connection to all the specified paths. If both the main worktree and linked worktrees have been moved manually, then running `repair` in the main worktree and specifying the new `<path>` of each linked worktree will reestablish all connections in both directions. unlock Unlock a worktree, allowing it to be pruned, moved or deleted. 
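The subcommands above compose naturally; a minimal end-to-end sketch (paths, branch name, and user identity are illustrative; assumes `git` is on `PATH`):

```shell
# Create, lock, unlock, and finally remove a linked worktree.
tmp=$(mktemp -d)
git init -q "$tmp/main"
cd "$tmp/main"
git -c user.name=example -c user.email=example@example.com \
    commit -q --allow-empty -m 'initial commit'
git worktree add -q -b hotfix ../hotfix        # new branch + linked worktree
git worktree lock --reason 'on a portable drive' ../hotfix
git worktree unlock ../hotfix
git worktree remove ../hotfix                  # clean, so no --force needed
remaining=$(git worktree list | wc -l)         # only the main worktree is left
```

Had the worktree still been locked, `remove` would have refused unless `--force` were given twice.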
Options ------- -f --force By default, `add` refuses to create a new worktree when `<commit-ish>` is a branch name and is already checked out by another worktree, or if `<path>` is already assigned to some worktree but is missing (for instance, if `<path>` was deleted manually). This option overrides these safeguards. To add a missing but locked worktree path, specify `--force` twice. `move` refuses to move a locked worktree unless `--force` is specified twice. If the destination is already assigned to some other worktree but is missing (for instance, if `<new-path>` was deleted manually), then `--force` allows the move to proceed; use `--force` twice if the destination is locked. `remove` refuses to remove an unclean worktree unless `--force` is used. To remove a locked worktree, specify `--force` twice. -b <new-branch> -B <new-branch> With `add`, create a new branch named `<new-branch>` starting at `<commit-ish>`, and check out `<new-branch>` into the new worktree. If `<commit-ish>` is omitted, it defaults to `HEAD`. By default, `-b` refuses to create a new branch if it already exists. `-B` overrides this safeguard, resetting `<new-branch>` to `<commit-ish>`. -d --detach With `add`, detach `HEAD` in the new worktree. See "DETACHED HEAD" in [git-checkout[1]](git-checkout). --[no-]checkout By default, `add` checks out `<commit-ish>`, however, `--no-checkout` can be used to suppress checkout in order to make customizations, such as configuring sparse-checkout. See "Sparse checkout" in [git-read-tree[1]](git-read-tree). --[no-]guess-remote With `worktree add <path>`, without `<commit-ish>`, instead of creating a new branch from `HEAD`, if there exists a tracking branch in exactly one remote matching the basename of `<path>`, base the new branch on the remote-tracking branch, and mark the remote-tracking branch as "upstream" from the new branch. This can also be set up as the default behaviour by using the `worktree.guessRemote` config option. 
--[no-]track When creating a new branch, if `<commit-ish>` is a branch, mark it as "upstream" from the new branch. This is the default if `<commit-ish>` is a remote-tracking branch. See `--track` in [git-branch[1]](git-branch) for details. --lock Keep the worktree locked after creation. This is the equivalent of `git worktree lock` after `git worktree add`, but without a race condition. -n --dry-run With `prune`, do not remove anything; just report what it would remove. --porcelain With `list`, output in an easy-to-parse format for scripts. This format will remain stable across Git versions and regardless of user configuration. It is recommended to combine this with `-z`. See below for details. -z Terminate each line with a NUL rather than a newline when `--porcelain` is specified with `list`. This makes it possible to parse the output when a worktree path contains a newline character. -q --quiet With `add`, suppress feedback messages. -v --verbose With `prune`, report all removals. With `list`, output additional information about worktrees (see below). --expire <time> With `prune`, only expire unused worktrees older than `<time>`. With `list`, annotate missing worktrees as prunable if they are older than `<time>`. --reason <string> With `lock` or with `add --lock`, an explanation why the worktree is locked. <worktree> Worktrees can be identified by path, either relative or absolute. If the last path components in the worktree’s path are unique among worktrees, they can be used to identify it. For example, if you only have two worktrees, at `/abc/def/ghi` and `/abc/def/ggg`, then `ghi` or `def/ghi` is enough to point to the former worktree. Refs ---- When using multiple worktrees, some refs are shared between all worktrees, but others are specific to an individual worktree. One example is `HEAD`, which is different for each worktree. This section is about the sharing rules and how to access refs of one worktree from another. 
In general, all pseudo refs are per-worktree and all refs starting with `refs/` are shared. Pseudo refs are ones like `HEAD` which are directly under `$GIT_DIR` instead of inside `$GIT_DIR/refs`. There are exceptions, however: refs inside `refs/bisect` and `refs/worktree` are not shared. Refs that are per-worktree can still be accessed from another worktree via two special paths, `main-worktree` and `worktrees`. The former gives access to per-worktree refs of the main worktree, while the latter to all linked worktrees. For example, `main-worktree/HEAD` or `main-worktree/refs/bisect/good` resolve to the same value as the main worktree’s `HEAD` and `refs/bisect/good` respectively. Similarly, `worktrees/foo/HEAD` or `worktrees/bar/refs/bisect/bad` are the same as `$GIT_COMMON_DIR/worktrees/foo/HEAD` and `$GIT_COMMON_DIR/worktrees/bar/refs/bisect/bad`. To access refs, it’s best not to look inside `$GIT_DIR` directly. Instead use commands such as [git-rev-parse[1]](git-rev-parse) or [git-update-ref[1]](git-update-ref) which will handle refs correctly. Configuration file ------------------ By default, the repository `config` file is shared across all worktrees. If the config variables `core.bare` or `core.worktree` are present in the common config file and `extensions.worktreeConfig` is disabled, then they will be applied to the main worktree only. In order to have worktree-specific configuration, you can turn on the `worktreeConfig` extension, e.g.: ``` $ git config extensions.worktreeConfig true ``` In this mode, specific configuration stays in the path pointed by `git rev-parse --git-path config.worktree`. You can add or update configuration in this file with `git config --worktree`. Older Git versions will refuse to access repositories with this extension. Note that in this file, the exception for `core.bare` and `core.worktree` is gone. If they exist in `$GIT_DIR/config`, you must move them to the `config.worktree` of the main worktree. 
You may also take this opportunity to review and move other configuration that you do not want to share to all worktrees: * `core.worktree` should never be shared. * `core.bare` should not be shared if the value is `core.bare=true`. * `core.sparseCheckout` should not be shared, unless you are sure you always use sparse checkout for all worktrees. See the documentation of `extensions.worktreeConfig` in [git-config[1]](git-config) for more details. Details ------- Each linked worktree has a private sub-directory in the repository’s `$GIT_DIR/worktrees` directory. The private sub-directory’s name is usually the base name of the linked worktree’s path, possibly appended with a number to make it unique. For example, when `$GIT_DIR=/path/main/.git` the command `git worktree add /path/other/test-next next` creates the linked worktree in `/path/other/test-next` and also creates a `$GIT_DIR/worktrees/test-next` directory (or `$GIT_DIR/worktrees/test-next1` if `test-next` is already taken). Within a linked worktree, `$GIT_DIR` is set to point to this private directory (e.g. `/path/main/.git/worktrees/test-next` in the example) and `$GIT_COMMON_DIR` is set to point back to the main worktree’s `$GIT_DIR` (e.g. `/path/main/.git`). These settings are made in a `.git` file located at the top directory of the linked worktree. Path resolution via `git rev-parse --git-path` uses either `$GIT_DIR` or `$GIT_COMMON_DIR` depending on the path. For example, in the linked worktree `git rev-parse --git-path HEAD` returns `/path/main/.git/worktrees/test-next/HEAD` (not `/path/other/test-next/.git/HEAD` or `/path/main/.git/HEAD`) while `git rev-parse --git-path refs/heads/master` uses `$GIT_COMMON_DIR` and returns `/path/main/.git/refs/heads/master`, since refs are shared across all worktrees, except `refs/bisect` and `refs/worktree`. See [gitrepository-layout[5]](gitrepository-layout) for more information. 
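The path resolution rules described in this section can be observed directly with `git rev-parse --git-path` (a sketch; repository layout, names, and identity are illustrative):

```shell
# Compare per-worktree vs shared path resolution from a linked worktree.
tmp=$(mktemp -d)
git init -q "$tmp/main"
cd "$tmp/main"
git -c user.name=example -c user.email=example@example.com \
    commit -q --allow-empty -m 'initial commit'
git worktree add -q ../test-next
cd ../test-next
# HEAD is per-worktree: resolves under .git/worktrees/test-next/
per_worktree=$(git rev-parse --git-path HEAD)
# refs/heads/* is shared: resolves under the main repository's .git/
shared=$(git rev-parse --git-path refs/heads/master)
```

Note that `--git-path` answers purely from the path rules, so the queried ref does not have to exist.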
The rule of thumb is do not make any assumption about whether a path belongs to `$GIT_DIR` or `$GIT_COMMON_DIR` when you need to directly access something inside `$GIT_DIR`. Use `git rev-parse --git-path` to get the final path. If you manually move a linked worktree, you need to update the `gitdir` file in the entry’s directory. For example, if a linked worktree is moved to `/newpath/test-next` and its `.git` file points to `/path/main/.git/worktrees/test-next`, then update `/path/main/.git/worktrees/test-next/gitdir` to reference `/newpath/test-next` instead. Better yet, run `git worktree repair` to reestablish the connection automatically. To prevent a `$GIT_DIR/worktrees` entry from being pruned (which can be useful in some situations, such as when the entry’s worktree is stored on a portable device), use the `git worktree lock` command, which adds a file named `locked` to the entry’s directory. The file contains the reason in plain text. For example, if a linked worktree’s `.git` file points to `/path/main/.git/worktrees/test-next` then a file named `/path/main/.git/worktrees/test-next/locked` will prevent the `test-next` entry from being pruned. See [gitrepository-layout[5]](gitrepository-layout) for details. When `extensions.worktreeConfig` is enabled, the config file `.git/worktrees/<id>/config.worktree` is read after `.git/config` is. List output format ------------------ The `worktree list` command has two output formats. The default format shows the details on a single line with columns. For example: ``` $ git worktree list /path/to/bare-source (bare) /path/to/linked-worktree abcd1234 [master] /path/to/other-linked-worktree 1234abc (detached HEAD) ``` The command also shows annotations for each worktree, according to its state. These annotations are: * `locked`, if the worktree is locked. * `prunable`, if the worktree can be pruned via `git worktree prune`. 
``` $ git worktree list /path/to/linked-worktree abcd1234 [master] /path/to/locked-worktree acbd5678 (brancha) locked /path/to/prunable-worktree 5678abc (detached HEAD) prunable ``` For these annotations, a reason might also be available, and this can be seen using the verbose mode. The annotation is then moved to the next line, indented, followed by the additional information. ``` $ git worktree list --verbose /path/to/linked-worktree abcd1234 [master] /path/to/locked-worktree-no-reason abcd5678 (detached HEAD) locked /path/to/locked-worktree-with-reason 1234abcd (brancha) locked: worktree path is mounted on a portable device /path/to/prunable-worktree 5678abc1 (detached HEAD) prunable: gitdir file points to non-existent location ``` Note that the annotation is moved to the next line if the additional information is available; otherwise it stays on the same line as the worktree itself. ### Porcelain Format The porcelain format has a line per attribute. If `-z` is given then the lines are terminated with NUL rather than a newline. Attributes are listed with a label and value separated by a single space. Boolean attributes (like `bare` and `detached`) are listed as a label only, and are present only if the value is true. Some attributes (like `locked`) can be listed as a label only or with a value depending upon whether a reason is available. The first attribute of a worktree is always `worktree`; an empty line indicates the end of the record. 
For example:

```
$ git worktree list --porcelain
worktree /path/to/bare-source
bare

worktree /path/to/linked-worktree
HEAD abcd1234abcd1234abcd1234abcd1234abcd1234
branch refs/heads/master

worktree /path/to/other-linked-worktree
HEAD 1234abc1234abc1234abc1234abc1234abc1234a
detached

worktree /path/to/linked-worktree-locked-no-reason
HEAD 5678abc5678abc5678abc5678abc5678abc5678c
branch refs/heads/locked-no-reason
locked

worktree /path/to/linked-worktree-locked-with-reason
HEAD 3456def3456def3456def3456def3456def3456b
branch refs/heads/locked-with-reason
locked reason why is locked

worktree /path/to/linked-worktree-prunable
HEAD 1233def1234def1234def1234def1234def1234b
detached
prunable gitdir file points to non-existent location
```

Unless `-z` is used, any "unusual" characters in the lock reason such as newlines are escaped and the entire reason is quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). For example:

```
$ git worktree list --porcelain
...
locked "reason\nwhy is locked"
...
```

Examples
--------

You are in the middle of a refactoring session and your boss comes in and demands that you fix something immediately. You might typically use [git-stash[1]](git-stash) to store your changes away temporarily; however, your working tree is in such a state of disarray (with new, moved, and removed files, and other bits and pieces strewn around) that you don’t want to risk disturbing any of it. Instead, you create a temporary linked worktree to make the emergency fix, remove it when done, and then resume your earlier refactoring session.

```
$ git worktree add -b emergency-fix ../temp master
$ pushd ../temp
# ... hack hack hack ...
$ git commit -a -m 'emergency fix for boss'
$ popd
$ git worktree remove ../temp
```

Bugs
----

Multiple checkout in general is still experimental, and the support for submodules is incomplete. It is NOT recommended to make multiple checkouts of a superproject.
git git-rebase git-rebase ========== Name ---- git-rebase - Reapply commits on top of another base tip Synopsis -------- ``` git rebase [-i | --interactive] [<options>] [--exec <cmd>] [--onto <newbase> | --keep-base] [<upstream> [<branch>]] git rebase [-i | --interactive] [<options>] [--exec <cmd>] [--onto <newbase>] --root [<branch>] git rebase (--continue | --skip | --abort | --quit | --edit-todo | --show-current-patch) ``` Description ----------- If `<branch>` is specified, `git rebase` will perform an automatic `git switch <branch>` before doing anything else. Otherwise it remains on the current branch. If `<upstream>` is not specified, the upstream configured in `branch.<name>.remote` and `branch.<name>.merge` options will be used (see [git-config[1]](git-config) for details) and the `--fork-point` option is assumed. If you are currently not on any branch or if the current branch does not have a configured upstream, the rebase will abort. All changes made by commits in the current branch but that are not in `<upstream>` are saved to a temporary area. This is the same set of commits that would be shown by `git log <upstream>..HEAD`; or by `git log 'fork_point'..HEAD`, if `--fork-point` is active (see the description on `--fork-point` below); or by `git log HEAD`, if the `--root` option is specified. The current branch is reset to `<upstream>` or `<newbase>` if the `--onto` option was supplied. This has the exact same effect as `git reset --hard <upstream>` (or `<newbase>`). `ORIG_HEAD` is set to point at the tip of the branch before the reset. The commits that were previously saved into the temporary area are then reapplied to the current branch, one by one, in order. Note that any commits in `HEAD` which introduce the same textual changes as a commit in `HEAD..<upstream>` are omitted (i.e., a patch already accepted upstream with a different commit message or timestamp will be skipped). 
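The process just described can be observed in a scratch repository. A minimal sketch (commit names and file names are invented for illustration; `git init -b` needs Git 2.28+):

```
# Commits on `topic` that are not in `master` are replayed on top of
# it, and ORIG_HEAD records the pre-rebase tip of the branch.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
g() { git -c user.name=a -c user.email=a@example.com "$@"; }
echo base > f; g add f; g commit -qm D
g switch -qc topic
echo t > t; g add t; g commit -qm A
g switch -q master
echo m > m; g add m; g commit -qm F
g switch -q topic
old_tip=$(git rev-parse HEAD)
g rebase -q master
git log --format=%s | tr '\n' ' '; echo   # A F D: A was replayed on F
[ "$(git rev-parse ORIG_HEAD)" = "$old_tip" ] && echo "ORIG_HEAD points at the old tip"
```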
It is possible that a merge failure will prevent this process from being completely automatic. You will have to resolve any such merge failure and run `git rebase --continue`. Another option is to bypass the commit that caused the merge failure with `git rebase --skip`. To check out the original `<branch>` and remove the `.git/rebase-apply` working files, use the command `git rebase --abort` instead. Assume the following history exists and the current branch is "topic": ``` A---B---C topic / D---E---F---G master ``` From this point, the result of either of the following commands: ``` git rebase master git rebase master topic ``` would be: ``` A'--B'--C' topic / D---E---F---G master ``` **NOTE:** The latter form is just a short-hand of `git checkout topic` followed by `git rebase master`. When rebase exits `topic` will remain the checked-out branch. If the upstream branch already contains a change you have made (e.g., because you mailed a patch which was applied upstream), then that commit will be skipped and warnings will be issued (if the `merge` backend is used). For example, running `git rebase master` on the following history (in which `A'` and `A` introduce the same set of changes, but have different committer information): ``` A---B---C topic / D---E---A'---F master ``` will result in: ``` B'---C' topic / D---E---A'---F master ``` Here is how you would transplant a topic branch based on one branch to another, to pretend that you forked the topic branch from the latter branch, using `rebase --onto`. First let’s assume your `topic` is based on branch `next`. For example, a feature developed in `topic` depends on some functionality which is found in `next`. ``` o---o---o---o---o master \ o---o---o---o---o next \ o---o---o topic ``` We want to make `topic` forked from branch `master`; for example, because the functionality on which `topic` depends was merged into the more stable `master` branch. 
We want our tree to look like this:

```
    o---o---o---o---o  master
    |            \
    |             o'--o'--o'  topic
     \
      o---o---o---o---o  next
```

We can get this using the following command:

```
git rebase --onto master next topic
```

Another example of the `--onto` option is to rebase part of a branch. If we have the following situation:

```
                        H---I---J topicB
                       /
              E---F---G  topicA
             /
A---B---C---D  master
```

then the command

```
git rebase --onto master topicA topicB
```

would result in:

```
             H'--I'--J'  topicB
            /
            | E---F---G  topicA
            |/
A---B---C---D  master
```

This is useful when topicB does not depend on topicA.

A range of commits could also be removed with rebase. If we have the following situation:

```
E---F---G---H---I---J  topicA
```

then the command

```
git rebase --onto topicA~5 topicA~3 topicA
```

would result in the removal of commits F and G:

```
E---H'---I'---J'  topicA
```

This is useful if F and G were flawed in some way, or should not be part of topicA. Note that the argument to `--onto` and the `<upstream>` parameter can be any valid commit-ish.

In case of conflict, `git rebase` will stop at the first problematic commit and leave conflict markers in the tree. You can use `git diff` to locate the markers (<<<<<<) and make edits to resolve the conflict. For each file you edit, you need to tell Git that the conflict has been resolved; typically this would be done with

```
git add <filename>
```

After resolving the conflict manually and updating the index with the desired resolution, you can continue the rebasing process with

```
git rebase --continue
```

Alternatively, you can undo the `git rebase` with

```
git rebase --abort
```

Options
-------

--onto <newbase>

Starting point at which to create the new commits. If the `--onto` option is not specified, the starting point is `<upstream>`. May be any valid commit, and not just an existing branch name.

As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base.
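The resolve-and-continue sequence can be sketched end to end in a scratch repository (file contents and names are invented; `GIT_EDITOR=true` accepts the reopened commit message unchanged):

```
# Force a conflict, inspect the markers, resolve, `git add`, and
# `git rebase --continue` -- the workflow described above.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
g() { git -c user.name=a -c user.email=a@example.com "$@"; }
echo one > f; g add f; g commit -qm base
g switch -qc topic
echo topic-version > f; g commit -qam topic-edit
g switch -q master
echo master-version > f; g commit -qam master-edit
g switch -q topic
if ! g rebase master >/dev/null 2>&1; then
  grep -c '<<<<<<<' f            # conflict markers are in the tree
  echo resolved > f              # edit the file to the desired content
  g add f                        # tell Git the conflict is resolved
  GIT_EDITOR=true g rebase --continue >/dev/null 2>&1
fi
git log --format=%s | tr '\n' ' '; echo   # topic-edit master-edit base
```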
You can leave out at most one of A and B, in which case it defaults to HEAD. --keep-base Set the starting point at which to create the new commits to the merge base of `<upstream>` and `<branch>`. Running `git rebase --keep-base <upstream> <branch>` is equivalent to running `git rebase --reapply-cherry-picks --no-fork-point --onto <upstream>...<branch> <upstream> <branch>`. This option is useful in the case where one is developing a feature on top of an upstream branch. While the feature is being worked on, the upstream branch may advance and it may not be the best idea to keep rebasing on top of the upstream but to keep the base commit as-is. As the base commit is unchanged this option implies `--reapply-cherry-picks` to avoid losing commits. Although both this option and `--fork-point` find the merge base between `<upstream>` and `<branch>`, this option uses the merge base as the `starting point` on which new commits will be created, whereas `--fork-point` uses the merge base to determine the `set of commits` which will be rebased. See also INCOMPATIBLE OPTIONS below. <upstream> Upstream branch to compare against. May be any valid commit, not just an existing branch name. Defaults to the configured upstream for the current branch. <branch> Working branch; defaults to `HEAD`. --continue Restart the rebasing process after having resolved a merge conflict. --abort Abort the rebase operation and reset HEAD to the original branch. If `<branch>` was provided when the rebase operation was started, then `HEAD` will be reset to `<branch>`. Otherwise `HEAD` will be reset to where it was when the rebase operation was started. --quit Abort the rebase operation but `HEAD` is not reset back to the original branch. The index and working tree are also left unchanged as a result. If a temporary stash entry was created using `--autostash`, it will be saved to the stash list. --apply Use applying strategies to rebase (calling `git-am` internally). 
This option may become a no-op in the future once the merge backend handles everything the apply one does.

See also INCOMPATIBLE OPTIONS below.

--empty={drop,keep,ask}

How to handle commits that are not empty to start and are not clean cherry-picks of any upstream commit, but which become empty after rebasing (because they contain a subset of already upstream changes). With drop (the default), commits that become empty are dropped. With keep, such commits are kept. With ask (implied by `--interactive`), the rebase will halt when an empty commit is applied allowing you to choose whether to drop it, edit files more, or just commit the empty changes. Other options, like `--exec`, will use the default of drop unless `-i`/`--interactive` is explicitly specified.

Note that commits which start empty are kept (unless `--no-keep-empty` is specified), and commits which are clean cherry-picks (as determined by `git log --cherry-mark ...`) are detected and dropped as a preliminary step (unless `--reapply-cherry-picks` or `--keep-base` is passed).

See also INCOMPATIBLE OPTIONS below.

--no-keep-empty
--keep-empty

Do not keep commits that start empty before the rebase (i.e. that do not change anything from their parent) in the result. The default is to keep commits which start empty, since creating such commits requires passing the `--allow-empty` override flag to `git commit`, signifying that a user is very intentionally creating such a commit and thus wants to keep it.

Usage of this flag will probably be rare, since you can get rid of commits that start empty by just firing up an interactive rebase and removing the lines corresponding to the commits you don’t want. This flag exists as a convenient shortcut, such as for cases where external tools generate many empty commits and you want them all removed.

For commits which do not start empty but become empty after rebasing, see the `--empty` flag.

See also INCOMPATIBLE OPTIONS below.
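The treatment of start-empty commits can be sketched in a scratch repository (branch names invented): the default rebase keeps a commit created with `--allow-empty`, while `--no-keep-empty` drops it.

```
# Two branches at the same tip: rebase one normally, the other with
# --no-keep-empty, and compare whether the empty commit survives.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
g() { git -c user.name=a -c user.email=a@example.com "$@"; }
g commit -q --allow-empty -m base
g switch -qc topic
g commit -q --allow-empty -m intentionally-empty
git branch topic2                     # same tip, for the second run
g switch -q master
echo x > f; g add f; g commit -qm advance
g switch -q topic
g rebase -q master                    # default: the empty commit is kept
git log --format=%s | grep -cx intentionally-empty
g switch -q topic2
g rebase -q --no-keep-empty master    # the empty commit is dropped
git log --format=%s | grep -qx intentionally-empty || echo dropped
```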
--reapply-cherry-picks --no-reapply-cherry-picks Reapply all clean cherry-picks of any upstream commit instead of preemptively dropping them. (If these commits then become empty after rebasing, because they contain a subset of already upstream changes, the behavior towards them is controlled by the `--empty` flag.) In the absence of `--keep-base` (or if `--no-reapply-cherry-picks` is given), these commits will be automatically dropped. Because this necessitates reading all upstream commits, this can be expensive in repositories with a large number of upstream commits that need to be read. When using the `merge` backend, warnings will be issued for each dropped commit (unless `--quiet` is given). Advice will also be issued unless `advice.skippedCherryPicks` is set to false (see [git-config[1]](git-config)). `--reapply-cherry-picks` allows rebase to forgo reading all upstream commits, potentially improving performance. See also INCOMPATIBLE OPTIONS below. --allow-empty-message No-op. Rebasing commits with an empty message used to fail and this option would override that behavior, allowing commits with empty messages to be rebased. Now commits with an empty message do not cause rebasing to halt. See also INCOMPATIBLE OPTIONS below. --skip Restart the rebasing process by skipping the current patch. --edit-todo Edit the todo list during an interactive rebase. --show-current-patch Show the current patch in an interactive rebase or when rebase is stopped because of conflicts. This is the equivalent of `git show REBASE_HEAD`. -m --merge Using merging strategies to rebase (default). Note that a rebase merge works by replaying each commit from the working branch on top of the `<upstream>` branch. Because of this, when a merge conflict happens, the side reported as `ours` is the so-far rebased series, starting with `<upstream>`, and `theirs` is the working branch. In other words, the sides are swapped. See also INCOMPATIBLE OPTIONS below. 
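The swapped sides can be verified directly: during a rebase conflict, `--theirs` selects the working-branch version. A sketch in a scratch repository (names invented):

```
# During `git rebase`, "ours" is the so-far-rebased/upstream side and
# "theirs" is the branch being rebased -- the reverse of a merge.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
g() { git -c user.name=a -c user.email=a@example.com "$@"; }
echo one > f; g add f; g commit -qm base
g switch -qc topic
echo topic-version > f; g commit -qam topic-edit
g switch -q master
echo master-version > f; g commit -qam master-edit
g switch -q topic
g rebase master >/dev/null 2>&1 || true   # stops on the conflict
git checkout -q --theirs f                # "theirs" = the topic branch!
cat f                                     # topic-version
g add f
GIT_EDITOR=true g rebase --continue >/dev/null 2>&1
```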
-s <strategy> --strategy=<strategy> Use the given merge strategy, instead of the default `ort`. This implies `--merge`. Because `git rebase` replays each commit from the working branch on top of the `<upstream>` branch using the given strategy, using the `ours` strategy simply empties all patches from the `<branch>`, which makes little sense. See also INCOMPATIBLE OPTIONS below. -X <strategy-option> --strategy-option=<strategy-option> Pass the <strategy-option> through to the merge strategy. This implies `--merge` and, if no strategy has been specified, `-s ort`. Note the reversal of `ours` and `theirs` as noted above for the `-m` option. See also INCOMPATIBLE OPTIONS below. --rerere-autoupdate --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. `--no-rerere-autoupdate` is a good way to double-check what `rerere` did and catch potential mismerges, before committing the result to the index with a separate `git add`. -S[<keyid>] --gpg-sign[=<keyid>] --no-gpg-sign GPG-sign commits. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. `--no-gpg-sign` is useful to countermand both `commit.gpgSign` configuration variable, and earlier `--gpg-sign`. -q --quiet Be quiet. Implies `--no-stat`. -v --verbose Be verbose. Implies `--stat`. --stat Show a diffstat of what changed upstream since the last rebase. The diffstat is also controlled by the configuration option rebase.stat. -n --no-stat Do not show a diffstat as part of the rebase process. --no-verify This option bypasses the pre-rebase hook. See also [githooks[5]](githooks). --verify Allows the pre-rebase hook to run, which is the default. This option can be used to override `--no-verify`. See also [githooks[5]](githooks). 
-C<n> Ensure at least `<n>` lines of surrounding context match before and after each change. When fewer lines of surrounding context exist they all must match. By default no context is ever ignored. Implies `--apply`. See also INCOMPATIBLE OPTIONS below. --no-ff --force-rebase -f Individually replay all rebased commits instead of fast-forwarding over the unchanged ones. This ensures that the entire history of the rebased branch is composed of new commits. You may find this helpful after reverting a topic branch merge, as this option recreates the topic branch with fresh commits so it can be remerged successfully without needing to "revert the reversion" (see the [revert-a-faulty-merge How-To](https://git-scm.com/docs/howto/revert-a-faulty-merge) for details). --fork-point --no-fork-point Use reflog to find a better common ancestor between `<upstream>` and `<branch>` when calculating which commits have been introduced by `<branch>`. When `--fork-point` is active, `fork_point` will be used instead of `<upstream>` to calculate the set of commits to rebase, where `fork_point` is the result of `git merge-base --fork-point <upstream> <branch>` command (see [git-merge-base[1]](git-merge-base)). If `fork_point` ends up being empty, the `<upstream>` will be used as a fallback. If `<upstream>` or `--keep-base` is given on the command line, then the default is `--no-fork-point`, otherwise the default is `--fork-point`. See also `rebase.forkpoint` in [git-config[1]](git-config). If your branch was based on `<upstream>` but `<upstream>` was rewound and your branch contains commits which were dropped, this option can be used with `--keep-base` in order to drop those commits from your branch. See also INCOMPATIBLE OPTIONS below. --ignore-whitespace Ignore whitespace differences when trying to reconcile differences. Currently, each backend implements an approximation of this behavior: apply backend When applying a patch, ignore changes in whitespace in context lines. 
Unfortunately, this means that if the "old" lines being replaced by the patch differ only in whitespace from the existing file, you will get a merge conflict instead of a successful patch application. merge backend Treat lines with only whitespace changes as unchanged when merging. Unfortunately, this means that any patch hunks that were intended to modify whitespace and nothing else will be dropped, even if the other side had no changes that conflicted. --whitespace=<option> This flag is passed to the `git apply` program (see [git-apply[1]](git-apply)) that applies the patch. Implies `--apply`. See also INCOMPATIBLE OPTIONS below. --committer-date-is-author-date Instead of using the current time as the committer date, use the author date of the commit being rebased as the committer date. This option implies `--force-rebase`. --ignore-date --reset-author-date Instead of using the author date of the original commit, use the current time as the author date of the rebased commit. This option implies `--force-rebase`. See also INCOMPATIBLE OPTIONS below. --signoff Add a `Signed-off-by` trailer to all the rebased commits. Note that if `--interactive` is given then only commits marked to be picked, edited or reworded will have the trailer added. See also INCOMPATIBLE OPTIONS below. -i --interactive Make a list of the commits which are about to be rebased. Let the user edit that list before rebasing. This mode can also be used to split commits (see SPLITTING COMMITS below). The commit list format can be changed by setting the configuration option rebase.instructionFormat. A customized instruction format will automatically have the long commit hash prepended to the format. See also INCOMPATIBLE OPTIONS below. -r --rebase-merges[=(rebase-cousins|no-rebase-cousins)] By default, a rebase will simply drop merge commits from the todo list, and put the rebased commits into a single, linear branch. 
With `--rebase-merges`, the rebase will instead try to preserve the branching structure within the commits that are to be rebased, by recreating the merge commits. Any resolved merge conflicts or manual amendments in these merge commits will have to be resolved/re-applied manually. By default, or when `no-rebase-cousins` was specified, commits which do not have `<upstream>` as direct ancestor will keep their original branch point, i.e. commits that would be excluded by [git-log[1]](git-log)'s `--ancestry-path` option will keep their original ancestry by default. If the `rebase-cousins` mode is turned on, such commits are instead rebased onto `<upstream>` (or `<onto>`, if specified). It is currently only possible to recreate the merge commits using the `ort` merge strategy; different merge strategies can be used only via explicit `exec git merge -s <strategy> [...]` commands. See also REBASING MERGES and INCOMPATIBLE OPTIONS below. -x <cmd> --exec <cmd> Append "exec <cmd>" after each line creating a commit in the final history. `<cmd>` will be interpreted as one or more shell commands. Any command that fails will interrupt the rebase, with exit code 1. You may execute several commands by either using one instance of `--exec` with several commands: ``` git rebase -i --exec "cmd1 && cmd2 && ..." ``` or by giving more than one `--exec`: ``` git rebase -i --exec "cmd1" --exec "cmd2" --exec ... ``` If `--autosquash` is used, `exec` lines will not be appended for the intermediate commits, and will only appear at the end of each squash/fixup series. This uses the `--interactive` machinery internally, but it can be run without an explicit `--interactive`. See also INCOMPATIBLE OPTIONS below. --root Rebase all commits reachable from `<branch>`, instead of limiting them with an `<upstream>`. This allows you to rebase the root commit(s) on a branch. 
When used with `--onto`, it will skip changes already contained in `<newbase>` (instead of `<upstream>`) whereas without `--onto` it will operate on every change. See also INCOMPATIBLE OPTIONS below. --autosquash --no-autosquash When the commit log message begins with "squash! …​" or "fixup! …​" or "amend! …​", and there is already a commit in the todo list that matches the same `...`, automatically modify the todo list of `rebase -i`, so that the commit marked for squashing comes right after the commit to be modified, and change the action of the moved commit from `pick` to `squash` or `fixup` or `fixup -C` respectively. A commit matches the `...` if the commit subject matches, or if the `...` refers to the commit’s hash. As a fall-back, partial matches of the commit subject work, too. The recommended way to create fixup/amend/squash commits is by using the `--fixup`, `--fixup=amend:` or `--fixup=reword:` and `--squash` options respectively of [git-commit[1]](git-commit). If the `--autosquash` option is enabled by default using the configuration variable `rebase.autoSquash`, this option can be used to override and disable this setting. See also INCOMPATIBLE OPTIONS below. --autostash --no-autostash Automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run rebase on a dirty worktree. However, use with care: the final stash application after a successful rebase might result in non-trivial conflicts. --reschedule-failed-exec --no-reschedule-failed-exec Automatically reschedule `exec` commands that failed. This only makes sense in interactive mode (or when an `--exec` option was provided). Even though this option applies once a rebase is started, it’s set for the whole rebase at the start based on either the `rebase.rescheduleFailedExec` configuration (see [git-config[1]](git-config) or "CONFIGURATION" below) or whether this option is provided. 
Otherwise an explicit `--no-reschedule-failed-exec` at the start would be overridden by the presence of `rebase.rescheduleFailedExec=true` configuration. --update-refs --no-update-refs Automatically force-update any branches that point to commits that are being rebased. Any branches that are checked out in a worktree are not updated in this way. If the configuration variable `rebase.updateRefs` is set, then this option can be used to override and disable this setting. Incompatible options -------------------- The following options: * --apply * --whitespace * -C are incompatible with the following options: * --merge * --strategy * --strategy-option * --allow-empty-message * --[no-]autosquash * --rebase-merges * --interactive * --exec * --no-keep-empty * --empty= * --reapply-cherry-picks * --edit-todo * --update-refs * --root when used in combination with --onto In addition, the following pairs of options are incompatible: * --keep-base and --onto * --keep-base and --root * --fork-point and --root Behavioral differences ---------------------- `git rebase` has two primary backends: `apply` and `merge`. (The `apply` backend used to be known as the `am` backend, but the name led to confusion as it looks like a verb instead of a noun. Also, the `merge` backend used to be known as the interactive backend, but it is now used for non-interactive cases as well. Both were renamed based on lower-level functionality that underpinned each.) There are some subtle differences in how these two backends behave: ### Empty commits The `apply` backend unfortunately drops intentionally empty commits, i.e. commits that started empty, though these are rare in practice. It also drops commits that become empty and has no option for controlling this behavior. The `merge` backend keeps intentionally empty commits by default (though with `-i` they are marked as empty in the todo list editor, or they can be dropped automatically with `--no-keep-empty`). 
Similar to the apply backend, by default the merge backend drops commits that become empty unless `-i`/`--interactive` is specified (in which case it stops and asks the user what to do). The merge backend also has an `--empty={drop,keep,ask}` option for changing the behavior of handling commits that become empty. ### Directory rename detection Due to the lack of accurate tree information (arising from constructing fake ancestors with the limited information available in patches), directory rename detection is disabled in the `apply` backend. Disabled directory rename detection means that if one side of history renames a directory and the other adds new files to the old directory, then the new files will be left behind in the old directory without any warning at the time of rebasing that you may want to move these files into the new directory. Directory rename detection works with the `merge` backend to provide you warnings in such cases. ### Context The `apply` backend works by creating a sequence of patches (by calling `format-patch` internally), and then applying the patches in sequence (calling `am` internally). Patches are composed of multiple hunks, each with line numbers, a context region, and the actual changes. The line numbers have to be taken with some fuzz, since the other side will likely have inserted or deleted lines earlier in the file. The context region is meant to help find how to adjust the line numbers in order to apply the changes to the right lines. However, if multiple areas of the code have the same surrounding lines of context, the wrong one can be picked. There are real-world cases where this has caused commits to be reapplied incorrectly with no conflicts reported. Setting `diff.context` to a larger value may prevent such types of problems, but increases the chance of spurious conflicts (since it will require more lines of matching context to apply). 
The `merge` backend works with a full copy of each relevant file, insulating it from these types of problems. ### Labelling of conflicts markers When there are content conflicts, the merge machinery tries to annotate each side’s conflict markers with the commits where the content came from. Since the `apply` backend drops the original information about the rebased commits and their parents (and instead generates new fake commits based off limited information in the generated patches), those commits cannot be identified; instead it has to fall back to a commit summary. Also, when `merge.conflictStyle` is set to `diff3` or `zdiff3`, the `apply` backend will use "constructed merge base" to label the content from the merge base, and thus provide no information about the merge base commit whatsoever. The `merge` backend works with the full commits on both sides of history and thus has no such limitations. ### Hooks The `apply` backend has not traditionally called the post-commit hook, while the `merge` backend has. Both have called the post-checkout hook, though the `merge` backend has squelched its output. Further, both backends only call the post-checkout hook with the starting point commit of the rebase, not the intermediate commits nor the final commit. In each case, the calling of these hooks was by accident of implementation rather than by design (both backends were originally implemented as shell scripts and happened to invoke other commands like `git checkout` or `git commit` that would call the hooks). Both backends should have the same behavior, though it is not entirely clear which, if any, is correct. We will likely make rebase stop calling either of these hooks in the future. ### Interruptability The `apply` backend has safety problems with an ill-timed interrupt; if the user presses Ctrl-C at the wrong time to try to abort the rebase, the rebase can enter a state where it cannot be aborted with a subsequent `git rebase --abort`. 
The `merge` backend does not appear to suffer from the same shortcoming. (See <https://lore.kernel.org/git/[email protected]/> for details.) ### Commit Rewording When a conflict occurs while rebasing, rebase stops and asks the user to resolve. Since the user may need to make notable changes while resolving conflicts, after conflicts are resolved and the user has run `git rebase --continue`, the rebase should open an editor and ask the user to update the commit message. The `merge` backend does this, while the `apply` backend blindly applies the original commit message. ### Miscellaneous differences There are a few more behavioral differences that most folks would probably consider inconsequential but which are mentioned for completeness: * Reflog: The two backends will use different wording when describing the changes made in the reflog, though both will make use of the word "rebase". * Progress, informational, and error messages: The two backends provide slightly different progress and informational messages. Also, the apply backend writes error messages (such as "Your files would be overwritten…​") to stdout, while the merge backend writes them to stderr. * State directories: The two backends keep their state in different directories under `.git/` Merge strategies ---------------- The merge mechanism (`git merge` and `git pull` commands) allows the backend `merge strategies` to be chosen with `-s` option. Some strategies can also take their own options, which can be passed by giving `-X<option>` arguments to `git merge` and/or `git pull`. ort This is the default merge strategy when pulling or merging one branch. This strategy can only resolve two heads using a 3-way merge algorithm. When there is more than one common ancestor that can be used for 3-way merge, it creates a merged tree of the common ancestors and uses that as the reference tree for the 3-way merge. 
This has been reported to result in fewer merge conflicts without causing mismerges by tests done on actual merge commits taken from Linux 2.6 kernel development history. Additionally this strategy can detect and handle merges involving renames. It does not make use of detected copies. The name for this algorithm is an acronym ("Ostensibly Recursive’s Twin") and came from the fact that it was written as a replacement for the previous default algorithm, `recursive`. The `ort` strategy can take the following options: ours This option forces conflicting hunks to be auto-resolved cleanly by favoring `our` version. Changes from the other tree that do not conflict with our side are reflected in the merge result. For a binary file, the entire contents are taken from our side. This should not be confused with the `ours` merge strategy, which does not even look at what the other tree contains at all. It discards everything the other tree did, declaring `our` history contains all that happened in it. theirs This is the opposite of `ours`; note that, unlike `ours`, there is no `theirs` merge strategy to confuse this merge option with. ignore-space-change ignore-all-space ignore-space-at-eol ignore-cr-at-eol Treats lines with the indicated type of whitespace change as unchanged for the sake of a three-way merge. Whitespace changes mixed with other changes to a line are not ignored. See also [git-diff[1]](git-diff) `-b`, `-w`, `--ignore-space-at-eol`, and `--ignore-cr-at-eol`. * If `their` version only introduces whitespace changes to a line, `our` version is used; * If `our` version introduces whitespace changes but `their` version includes a substantial change, `their` version is used; * Otherwise, the merge proceeds in the usual way. renormalize This runs a virtual check-out and check-in of all three stages of a file when resolving a three-way merge. This option is meant to be used when merging branches with different clean filters or end-of-line normalization rules. 
See "Merging branches with differing checkin/checkout attributes" in [gitattributes[5]](gitattributes) for details.

no-renormalize

Disables the `renormalize` option. This overrides the `merge.renormalize` configuration variable.

find-renames[=<n>]

Turn on rename detection, optionally setting the similarity threshold. This is the default. This overrides the `merge.renames` configuration variable. See also [git-diff[1]](git-diff) `--find-renames`.

rename-threshold=<n>

Deprecated synonym for `find-renames=<n>`.

subtree[=<path>]

This option is a more advanced form of `subtree` strategy, where the strategy makes a guess on how two trees must be shifted to match with each other when merging. Instead, the specified path is prefixed (or stripped from the beginning) to make the shape of two trees to match.

recursive

This can only resolve two heads using a 3-way merge algorithm. When there is more than one common ancestor that can be used for 3-way merge, it creates a merged tree of the common ancestors and uses that as the reference tree for the 3-way merge. This has been reported to result in fewer merge conflicts without causing mismerges by tests done on actual merge commits taken from Linux 2.6 kernel development history. Additionally this can detect and handle merges involving renames. It does not make use of detected copies. This was the default strategy for resolving two heads from Git v0.99.9k until v2.33.0.

The `recursive` strategy takes the same options as `ort`. However, there are three additional options that `ort` ignores (not documented above) that are potentially useful with the `recursive` strategy:

patience

Deprecated synonym for `diff-algorithm=patience`.

diff-algorithm=[patience|minimal|histogram|myers]

Use a different diff algorithm while merging, which can help avoid mismerges that occur due to unimportant matching lines (such as braces from distinct functions). See also [git-diff[1]](git-diff) `--diff-algorithm`.
Note that `ort` specifically uses `diff-algorithm=histogram`, while `recursive` defaults to the `diff.algorithm` config setting.

no-renames

Turn off rename detection. This overrides the `merge.renames` configuration variable. See also [git-diff[1]](git-diff) `--no-renames`.

resolve

This can only resolve two heads (i.e. the current branch and another branch you pulled from) using a 3-way merge algorithm. It tries to carefully detect criss-cross merge ambiguities. It does not handle renames.

octopus

This resolves cases with more than two heads, but refuses to do a complex merge that needs manual resolution. It is primarily meant to be used for bundling topic branch heads together. This is the default merge strategy when pulling or merging more than one branch.

ours

This resolves any number of heads, but the resulting tree of the merge is always that of the current branch head, effectively ignoring all changes from all other branches. It is meant to be used to supersede old development history of side branches. Note that this is different from the `-Xours` option to the `recursive` merge strategy.

subtree

This is a modified `ort` strategy. When merging trees A and B, if B corresponds to a subtree of A, B is first adjusted to match the tree structure of A, instead of reading the trees at the same level. This adjustment is also done to the common ancestor tree.

With the strategies that use 3-way merge (including the default, `ort`), if a change is made on both branches, but later reverted on one of the branches, that change will be present in the merged result; some people find this behavior confusing. It occurs because only the heads and the merge base are considered when performing a merge, not the individual commits. The merge algorithm therefore considers the reverted change as no change at all, and substitutes the changed version instead.

Notes
-----

You should understand the implications of using `git rebase` on a repository that you share.
See also RECOVERING FROM UPSTREAM REBASE below.

When the rebase is run, it will first execute a `pre-rebase` hook if one exists. You can use this hook to do sanity checks and reject the rebase if it isn’t appropriate. Please see the template `pre-rebase` hook script for an example.

Upon completion, `<branch>` will be the current branch.

Interactive mode
----------------

Rebasing interactively means that you have a chance to edit the commits which are rebased. You can reorder the commits, and you can remove them (weeding out bad or otherwise unwanted patches).

The interactive mode is meant for this type of workflow:

1. have a wonderful idea
2. hack on the code
3. prepare a series for submission
4. submit

where point 2. consists of several instances of

a) regular use

1. finish something worthy of a commit
2. commit

b) independent fixup

1. realize that something does not work
2. fix that
3. commit it

Sometimes the thing fixed in b.2. cannot be amended to the not-quite perfect commit it fixes, because that commit is buried deeply in a patch series. That is exactly what interactive rebase is for: use it after plenty of "a"s and "b"s, by rearranging and editing commits, and squashing multiple commits into one.

Start it with the last commit you want to retain as-is:

```
git rebase -i <after-this-commit>
```

An editor will be fired up with all the commits in your current branch (ignoring merge commits), which come after the given commit. You can reorder the commits in this list to your heart’s content, and you can remove them. The list looks more or less like this:

```
pick deadbee The oneline of this commit
pick fa1afe1 The oneline of the next commit
...
```

The oneline descriptions are purely for your pleasure; `git rebase` will not look at them but at the commit names ("deadbee" and "fa1afe1" in this example), so do not delete or edit the names.
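For testing or automation, the todo list shown above can also be edited by a script rather than by hand, by pointing `GIT_SEQUENCE_EDITOR` at a command that rewrites the todo file. The following is a minimal sketch in a throwaway repository (it assumes `git` and GNU `sed` are on `PATH`; all file names and commit messages are invented for illustration):

```shell
# Sketch: edit the interactive-rebase todo list with a script instead of an
# editor, by setting GIT_SEQUENCE_EDITOR. Names below are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
for msg in one two three; do
  echo "$msg" >"$msg.txt"
  git add "$msg.txt"
  git commit -qm "$msg"
done
# Turn the "pick" line for commit "two" into "drop", dropping that commit.
GIT_SEQUENCE_EDITOR='sed -i "/ two$/s/^pick/drop/"' git rebase -i HEAD~2
git log --format=%s   # "two" is gone; "three" was replayed on top of "one"
```

The same mechanism works for any scripted todo edit (reordering lines, changing `pick` to `reword`, and so on); interactively you would simply make the same edit in the editor that `git rebase -i` opens.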
By replacing the command "pick" with the command "edit", you can tell `git rebase` to stop after applying that commit, so that you can edit the files and/or the commit message, amend the commit, and continue rebasing.

To interrupt the rebase (just like an "edit" command would do, but without cherry-picking any commit first), use the "break" command.

If you just want to edit the commit message for a commit, replace the command "pick" with the command "reword".

To drop a commit, replace the command "pick" with "drop", or just delete the matching line.

If you want to fold two or more commits into one, replace the command "pick" for the second and subsequent commits with "squash" or "fixup". If the commits had different authors, the folded commit will be attributed to the author of the first commit. The suggested commit message for the folded commit is the concatenation of the first commit’s message with those identified by "squash" commands, omitting the messages of commits identified by "fixup" commands, unless "fixup -c" is used. In that case the suggested commit message is only the message of the "fixup -c" commit, and an editor is opened allowing you to edit the message. The contents (patch) of the "fixup -c" commit are still incorporated into the folded commit. If there is more than one "fixup -c" commit, the message from the final one is used. You can also use "fixup -C" to get the same behavior as "fixup -c" except without opening an editor.

`git rebase` will stop when "pick" has been replaced with "edit" or when a command fails due to merge errors. When you are done editing and/or resolving conflicts you can continue with `git rebase --continue`.

For example, suppose you want to reorder the last 5 commits so that what was `HEAD~4` becomes the new `HEAD`. To achieve that, you would call `git rebase` like this:

```
$ git rebase -i HEAD~5
```

And move the first patch to the end of the list.

You might want to recreate merge commits, e.g.
if you have a history like this:

```
        X
         \
      A---M---B
         /
---o---O---P---Q
```

Suppose you want to rebase the side branch starting at "A" to "Q". Make sure that the current `HEAD` is "B", and call

```
$ git rebase -i -r --onto Q O
```

Reordering and editing commits usually creates untested intermediate steps. You may want to check that your history editing did not break anything by running a test, or at least recompiling at intermediate points in history by using the "exec" command (shortcut "x"). You may do so by creating a todo list like this one:

```
pick deadbee Implement feature XXX
fixup f1a5c00 Fix to feature XXX
exec make
pick c0ffeee The oneline of the next commit
edit deadbab The oneline of the commit after
exec cd subdir; make test
...
```

The interactive rebase will stop when a command fails (i.e. exits with non-0 status) to give you an opportunity to fix the problem. You can continue with `git rebase --continue`.

The "exec" command launches the command in a shell (the one specified in `$SHELL`, or the default shell if `$SHELL` is not set), so you can use shell features (like "cd", ">", ";" …). The command is run from the root of the working tree.

```
$ git rebase -i --exec "make test"
```

This command lets you check that intermediate commits are compilable. The todo list then looks like this:

```
pick 5928aea one
exec make test
pick 04d0fda two
exec make test
pick ba46169 three
exec make test
pick f4593f9 four
exec make test
```

Splitting commits
-----------------

In interactive mode, you can mark commits with the action "edit". However, this does not necessarily mean that `git rebase` expects the result of this edit to be exactly one commit. Indeed, you can undo the commit, or you can add other commits. This can be used to split a commit into two:

* Start an interactive rebase with `git rebase -i <commit>^`, where `<commit>` is the commit you want to split. In fact, any commit range will do, as long as it contains that commit.
* Mark the commit you want to split with the action "edit".
* When it comes to editing that commit, execute `git reset HEAD^`. The effect is that the `HEAD` is rewound by one, and the index follows suit. However, the working tree stays the same.
* Now add the changes to the index that you want to have in the first commit. You can use `git add` (possibly interactively) or `git gui` (or both) to do that.
* Commit the now-current index with whatever commit message is appropriate now.
* Repeat the last two steps until your working tree is clean.
* Continue the rebase with `git rebase --continue`.

If you are not absolutely sure that the intermediate revisions are consistent (they compile, pass the testsuite, etc.) you should use `git stash` to stash away the not-yet-committed changes after each commit, test, and amend the commit if fixes are necessary.

Recovering from upstream rebase
-------------------------------

Rebasing (or any other form of rewriting) a branch that others have based work on is a bad idea: anyone downstream of it is forced to manually fix their history. This section explains how to do the fix from the downstream’s point of view. The real fix, however, would be to avoid rebasing the upstream in the first place.

To illustrate, suppose you are in a situation where someone develops a `subsystem` branch, and you are working on a `topic` that is dependent on this `subsystem`.
You might end up with a history like the following:

```
    o---o---o---o---o---o---o---o  master
         \
          o---o---o---o---o  subsystem
                           \
                            *---*---*  topic
```

If `subsystem` is rebased against `master`, the following happens:

```
    o---o---o---o---o---o---o---o  master
         \                       \
          o---o---o---o---o       o'--o'--o'--o'--o'  subsystem
                           \
                            *---*---*  topic
```

If you now continue development as usual, and eventually merge `topic` to `subsystem`, the commits from `subsystem` will remain duplicated forever:

```
    o---o---o---o---o---o---o---o  master
         \                       \
          o---o---o---o---o       o'--o'--o'--o'--o'--M  subsystem
                           \                         /
                            *---*---*-..........-*--*  topic
```

Such duplicates are generally frowned upon because they clutter up history, making it harder to follow. To clean things up, you need to transplant the commits on `topic` to the new `subsystem` tip, i.e., rebase `topic`. This becomes a ripple effect: anyone downstream from `topic` is forced to rebase too, and so on!

There are two kinds of fixes, discussed in the following subsections:

Easy case: The changes are literally the same.

This happens if the `subsystem` rebase was a simple rebase and had no conflicts.

Hard case: The changes are not the same.

This happens if the `subsystem` rebase had conflicts, or used `--interactive` to omit, edit, squash, or fixup commits; or if the upstream used one of `commit --amend`, `reset`, or a full history rewriting command like [`filter-repo`](https://github.com/newren/git-filter-repo).

### The easy case

Only works if the changes (patch IDs based on the diff contents) on `subsystem` are literally the same before and after the rebase `subsystem` did.

In that case, the fix is easy because `git rebase` knows to skip changes that are already present in the new upstream (unless `--reapply-cherry-picks` is given).
So if you say (assuming you’re on `topic`)

```
$ git rebase subsystem
```

you will end up with the fixed history

```
    o---o---o---o---o---o---o---o  master
                                 \
                                  o'--o'--o'--o'--o'  subsystem
                                                   \
                                                    *---*---*  topic
```

### The hard case

Things get more complicated if the `subsystem` changes do not exactly correspond to the ones before the rebase.

| | |
| --- | --- |
| Note | While an "easy case recovery" sometimes appears to be successful even in the hard case, it may have unintended consequences. For example, a commit that was removed via `git rebase --interactive` will be **resurrected**! |

The idea is to manually tell `git rebase` "where the old `subsystem` ended and your `topic` began", that is, what the old merge base between them was. You will have to find a way to name the last commit of the old `subsystem`, for example:

* With the `subsystem` reflog: after `git fetch`, the old tip of `subsystem` is at `subsystem@{1}`. Subsequent fetches will increase the number. (See [git-reflog[1]](git-reflog).)
* Relative to the tip of `topic`: knowing that your `topic` has three commits, the old tip of `subsystem` must be `topic~3`.

You can then transplant the old `subsystem..topic` to the new tip by saying (for the reflog case, and assuming you are on `topic` already):

```
$ git rebase --onto subsystem subsystem@{1}
```

The ripple effect of a "hard case" recovery is especially bad: `everyone` downstream from `topic` will now have to perform a "hard case" recovery too!

Rebasing merges
---------------

The interactive rebase command was originally designed to handle individual patch series. As such, it makes sense to exclude merge commits from the todo list, as the developer may have merged the then-current `master` while working on the branch, only to rebase all the commits onto `master` eventually (skipping the merge commits).
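The patch-id matching behind the "easy case" above can be observed in a throwaway repository. In this sketch (assuming `git` is on `PATH`; all branch, file, and commit names are invented), the upstream rewrite changes only a commit message, so the diff and therefore the patch-id are unchanged, and the recovery rebase silently skips the already-present change:

```shell
# Sketch of the "easy case": git rebase skips commits whose patch is already
# in the new upstream (same diff => same patch-id). Names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo base >file; git add file; git commit -qm base
git checkout -qb subsystem
echo s1 >>file; git commit -qam s1
git checkout -qb topic
echo t1 >t.txt; git add t.txt; git commit -qm t1
# Upstream rewrites subsystem: the message changes but the diff does not.
git checkout -q subsystem
git commit --amend -qm "s1 (rewritten)"
# Recover on topic: the old "s1" is recognized as already present and skipped.
git checkout -q topic
git rebase -q subsystem
git log --format=%s
```

In a "hard case", where the rewritten commits have different diffs, this skipping cannot happen, which is why the explicit `--onto` invocation is needed instead.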
However, there are legitimate reasons why a developer may want to recreate merge commits: to keep the branch structure (or "commit topology") when working on multiple, inter-related branches.

In the following example, the developer works on a topic branch that refactors the way buttons are defined, and on another topic branch that uses that refactoring to implement a "Report a bug" button. The output of `git log --graph --format=%s -5` may look like this:

```
*   Merge branch 'report-a-bug'
|\
| * Add the feedback button
* | Merge branch 'refactor-button'
|\ \
| |/
| * Use the Button class for all buttons
| * Extract a generic Button class from the DownloadButton one
```

The developer might want to rebase those commits to a newer `master` while keeping the branch topology, for example when the first topic branch is expected to be integrated into `master` much earlier than the second one, say, to resolve merge conflicts with changes to the DownloadButton class that made it into `master`.

This rebase can be performed using the `--rebase-merges` option. It will generate a todo list looking like this:

```
label onto

# Branch: refactor-button
reset onto
pick 123456 Extract a generic Button class from the DownloadButton one
pick 654321 Use the Button class for all buttons
label refactor-button

# Branch: report-a-bug
reset refactor-button # Use the Button class for all buttons
pick abcdef Add the feedback button
label report-a-bug

reset onto
merge -C a1b2c3 refactor-button # Merge 'refactor-button'
merge -C 6f5e4d report-a-bug # Merge 'report-a-bug'
```

In contrast to a regular interactive rebase, there are `label`, `reset` and `merge` commands in addition to `pick` ones.

The `label` command associates a label with the current HEAD when that command is executed. These labels are created as worktree-local refs (`refs/rewritten/<label>`) that will be deleted when the rebase finishes.
That way, rebase operations in multiple worktrees linked to the same repository do not interfere with one another. If the `label` command fails, it is rescheduled immediately, with a helpful message how to proceed.

The `reset` command resets the HEAD, index and worktree to the specified revision. It is similar to an `exec git reset --hard <label>`, but refuses to overwrite untracked files. If the `reset` command fails, it is rescheduled immediately, with a helpful message how to edit the todo list (this typically happens when a `reset` command was inserted into the todo list manually and contains a typo).

The `merge` command will merge the specified revision(s) into whatever is HEAD at that time. With `-C <original-commit>`, the commit message of the specified merge commit will be used. When the `-C` is changed to a lower-case `-c`, the message will be opened in an editor after a successful merge so that the user can edit the message.

If a `merge` command fails for any reason other than merge conflicts (i.e. when the merge operation did not even start), it is rescheduled immediately.

By default, the `merge` command will use the `ort` merge strategy for regular merges, and `octopus` for octopus merges. One can specify a default strategy for all merges using the `--strategy` argument when invoking rebase, or can override specific merges in the interactive list of commands by using an `exec` command to call `git merge` explicitly with a `--strategy` argument. Note that when calling `git merge` explicitly like this, you can make use of the fact that the labels are worktree-local refs (the ref `refs/rewritten/onto` would correspond to the label `onto`, for example) in order to refer to the branches you want to merge.

Note: the first command (`label onto`) labels the revision onto which the commits are rebased; the name `onto` is just a convention, as a nod to the `--onto` option.
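The end-to-end effect of `--rebase-merges` can be sketched in a throwaway repository (assuming `git` is on `PATH`; branch and file names are invented for illustration): the merge commit from the original topology is recreated on top of the updated base rather than being flattened away:

```shell
# Sketch: --rebase-merges recreates merge commits instead of dropping them.
# All names below are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
main=$(git symbolic-ref --short HEAD)
echo base >base.txt; git add base.txt; git commit -qm base
git checkout -qb side
echo s >side.txt; git add side.txt; git commit -qm side-work
git checkout -q "$main"
git checkout -qb feature
echo f >feat.txt; git add feat.txt; git commit -qm feature-work
git merge -q --no-ff -m "Merge branch 'side'" side
git checkout -q "$main"
echo m >main.txt; git add main.txt; git commit -qm mainline
git checkout -q feature
git rebase -q --rebase-merges "$main"
git log --oneline --graph   # the merge commit survives, now based on mainline
```

A plain `git rebase "$main"` on the same history would instead linearize the branch, replaying the individual commits and discarding the merge commit entirely.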
It is also possible to introduce completely new merge commits from scratch by adding a command of the form `merge <merge-head>`. This form will generate a tentative commit message and always open an editor to let the user edit it. This can be useful e.g. when a topic branch turns out to address more than a single concern and wants to be split into two or even more topic branches.

Consider this todo list:

```
pick 192837 Switch from GNU Makefiles to CMake
pick 5a6c7e Document the switch to CMake
pick 918273 Fix detection of OpenSSL in CMake
pick afbecd http: add support for TLS v1.3
pick fdbaec Fix detection of cURL in CMake on Windows
```

The one commit in this list that is not related to CMake may very well have been motivated by working on fixing all those bugs introduced by switching to CMake, but it addresses a different concern. To split this branch into two topic branches, the todo list could be edited like this:

```
label onto

pick afbecd http: add support for TLS v1.3
label tlsv1.3

reset onto
pick 192837 Switch from GNU Makefiles to CMake
pick 918273 Fix detection of OpenSSL in CMake
pick fdbaec Fix detection of cURL in CMake on Windows
pick 5a6c7e Document the switch to CMake
label cmake

reset onto
merge tlsv1.3
merge cmake
```

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

rebase.backend

Default backend to use for rebasing. Possible choices are `apply` or `merge`. In the future, if the merge backend gains all remaining capabilities of the apply backend, this setting may become unused.

rebase.stat

Whether to show a diffstat of what changed upstream since the last rebase. False by default.

rebase.autoSquash

If set to true enable `--autosquash` option by default.

rebase.autoStash

When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends.
This means that you can run rebase on a dirty worktree. However, use with care: the final stash application after a successful rebase might result in non-trivial conflicts. This option can be overridden by the `--no-autostash` and `--autostash` options of [git-rebase[1]](git-rebase). Defaults to false.

rebase.updateRefs

If set to true enable `--update-refs` option by default.

rebase.missingCommitsCheck

If set to "warn", git rebase -i will print a warning if some commits are removed (e.g. a line was deleted), however the rebase will still proceed. If set to "error", it will print the previous warning and stop the rebase, `git rebase --edit-todo` can then be used to correct the error. If set to "ignore", no checking is done. To drop a commit without warning or error, use the `drop` command in the todo list. Defaults to "ignore".

rebase.instructionFormat

A format string, as specified in [git-log[1]](git-log), to be used for the todo list during an interactive rebase. The format will automatically have the long commit hash prepended to the format.

rebase.abbreviateCommands

If set to true, `git rebase` will use abbreviated command names in the todo list resulting in something like this:

```
p deadbee The oneline of the commit
p fa1afe1 The oneline of the next commit
...
```

instead of:

```
pick deadbee The oneline of the commit
pick fa1afe1 The oneline of the next commit
...
```

Defaults to false.

rebase.rescheduleFailedExec

Automatically reschedule `exec` commands that failed. This only makes sense in interactive mode (or when an `--exec` option was provided). This is the same as specifying the `--reschedule-failed-exec` option.

rebase.forkPoint

If set to false set `--no-fork-point` option by default.

sequence.editor

Text editor used by `git rebase -i` for editing the rebase instruction file. The value is meant to be interpreted by the shell when it is used. It can be overridden by the `GIT_SEQUENCE_EDITOR` environment variable.
When not configured the default commit message editor is used instead.
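The variables above are ordinary configuration keys, so they can be set per-repository (or with `--global` for every repository) using `git config`. A short sketch with illustrative values (assuming `git` is on `PATH`):

```shell
# Sketch: setting some of the rebase.* keys described above in a throwaway
# repository. The chosen values are examples, not recommendations.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config rebase.autoSquash true
git config rebase.autoStash true
git config rebase.missingCommitsCheck error
git config rebase.instructionFormat '%s [%an]'
git config --get-regexp '^rebase\.'   # list what was just set
```

Because configuration key names are case-insensitive, `git config rebase.autosquash` and `git config rebase.autoSquash` refer to the same setting.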
git gitrepository-layout

gitrepository-layout
====================

Name
----

gitrepository-layout - Git Repository Layout

Synopsis
--------

$GIT\_DIR/\*

Description
-----------

A Git repository comes in two different flavours:

* a `.git` directory at the root of the working tree;
* a `<project>.git` directory that is a `bare` repository (i.e. without its own working tree), that is typically used for exchanging histories with others by pushing into it and fetching from it.

**Note**: Also you can have a plain text file `.git` at the root of your working tree, containing `gitdir: <path>` to point at the real directory that has the repository. This mechanism is often used for a working tree of a submodule checkout, to allow you in the containing superproject to `git checkout` a branch that does not have the submodule. The `checkout` has to remove the entire submodule working tree, without losing the submodule repository.

These things may exist in a Git repository.

objects

Object store associated with this repository. Usually an object store is self sufficient (i.e. all the objects that are referred to by an object found in it are also found in it), but there are a few ways to violate it.

1. You could have an incomplete but locally usable repository by creating a shallow clone. See [git-clone[1]](git-clone).
2. You could be using the `objects/info/alternates` or `$GIT_ALTERNATE_OBJECT_DIRECTORIES` mechanisms to `borrow` objects from other object stores. A repository with this kind of incomplete object store is not suitable to be published for use with dumb transports but otherwise is OK as long as `objects/info/alternates` points at the object stores it borrows from.

This directory is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/objects" will be used instead.

objects/[0-9a-f][0-9a-f]

A newly created object is stored in its own file.
The objects are splayed over 256 subdirectories using the first two characters of the sha1 object name to keep the number of directory entries in `objects` itself to a manageable number. Objects found here are often called `unpacked` (or `loose`) objects.

objects/pack

Packs (files that store many objects in compressed form, along with index files to allow them to be randomly accessed) are found in this directory.

objects/info

Additional information about the object store is recorded in this directory.

objects/info/packs

This file is to help dumb transports discover what packs are available in this object store. Whenever a pack is added or removed, `git update-server-info` should be run to keep this file up to date if the repository is published for dumb transports. `git repack` does this by default.

objects/info/alternates

This file records paths to alternate object stores that this object store borrows objects from, one pathname per line. Note that not only native Git tools use it locally, but the HTTP fetcher also tries to use it remotely; this will usually work if you have relative paths (relative to the object database, not to the repository!) in your alternates file, but it will not work if you use absolute paths unless the absolute path in filesystem and web URL is the same. See also `objects/info/http-alternates`.

objects/info/http-alternates

This file records URLs to alternate object stores that this object store borrows objects from, to be used when the repository is fetched over HTTP.

refs

References are stored in subdirectories of this directory. The `git prune` command knows to preserve objects reachable from refs found in this directory and its subdirectories. This directory is ignored (except refs/bisect, refs/rewritten and refs/worktree) if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/refs" will be used instead.
refs/heads/`name`

records tip-of-the-tree commit objects of branch `name`

refs/tags/`name`

records any object name (not necessarily a commit object, or a tag object that points at a commit object).

refs/remotes/`name`

records tip-of-the-tree commit objects of branches copied from a remote repository.

refs/replace/`<obj-sha1>`

records the SHA-1 of the object that replaces `<obj-sha1>`. This is similar to info/grafts and is internally used and maintained by [git-replace[1]](git-replace). Such refs can be exchanged between repositories while grafts are not.

packed-refs

records the same information as refs/heads/, refs/tags/, and friends record in a more efficient way. See [git-pack-refs[1]](git-pack-refs). This file is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/packed-refs" will be used instead.

HEAD

A symref (see glossary) to the `refs/heads/` namespace describing the currently active branch. It does not mean much if the repository is not associated with any working tree (i.e. a `bare` repository), but a valid Git repository **must** have the HEAD file; some porcelains may use it to guess the designated "default" branch of the repository (usually `master`). It is legal if the named branch `name` does not (yet) exist. In some legacy setups, it is a symbolic link instead of a symref that points at the current branch.

HEAD can also record a specific commit directly, instead of being a symref to point at the current branch. Such a state is often called `detached HEAD`. See [git-checkout[1]](git-checkout) for details.

config

Repository specific configuration file. This file is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/config" will be used instead.

config.worktree

Working directory specific configuration file for the main working directory in multiple working directory setup (see [git-worktree[1]](git-worktree)).

branches

A slightly deprecated way to store shorthands to be used to specify a URL to `git fetch`, `git pull` and `git push`.
A file can be stored as `branches/<name>` and then `name` can be given to these commands in place of the `repository` argument. See the REMOTES section in [git-fetch[1]](git-fetch) for details. This mechanism is legacy and not likely to be found in modern repositories. This directory is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/branches" will be used instead.

hooks

Hooks are customization scripts used by various Git commands. A handful of sample hooks are installed when `git init` is run, but all of them are disabled by default. To enable, the `.sample` suffix has to be removed from the filename by renaming. Read [githooks[5]](githooks) for more details about each hook. This directory is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/hooks" will be used instead.

common

When multiple working trees are used, most files in $GIT\_DIR are per-worktree, with a few known exceptions. All files under `common` however will be shared between all working trees.

index

The current index file for the repository. It is usually not found in a bare repository.

sharedindex.<SHA-1>

The shared index part, to be referenced by $GIT\_DIR/index and other temporary index files. Only valid in split index mode.

info

Additional information about the repository is recorded in this directory. This directory is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/info" will be used instead.

info/refs

This file helps dumb transports discover what refs are available in this repository. If the repository is published for dumb transports, this file should be regenerated by `git update-server-info` every time a tag or branch is created or modified. This is normally done from the `hooks/update` hook, which is run by the `git-receive-pack` command when you `git push` into the repository.

info/grafts

This file records fake commit ancestry information, to pretend the set of parents a commit has is different from how the commit was actually created.
One record per line describes a commit and its fake parents by listing their 40-byte hexadecimal object names separated by a space and terminated by a newline.

Note that the grafts mechanism is outdated and can lead to problems transferring objects between repositories; see [git-replace[1]](git-replace) for a more flexible and robust system to do the same thing.

info/exclude

This file, by convention among Porcelains, stores the exclude pattern list. `.gitignore` is the per-directory ignore file. `git status`, `git add`, `git rm` and `git clean` look at it but the core Git commands do not look at it. See also: [gitignore[5]](gitignore).

info/attributes

Defines which attributes to assign to a path, similar to per-directory `.gitattributes` files. See also: [gitattributes[5]](gitattributes).

info/sparse-checkout

This file stores sparse checkout patterns. See also: [git-read-tree[1]](git-read-tree).

remotes

Stores shorthands for URL and default refnames for use when interacting with remote repositories via `git fetch`, `git pull` and `git push` commands. See the REMOTES section in [git-fetch[1]](git-fetch) for details. This mechanism is legacy and not likely to be found in modern repositories. This directory is ignored if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/remotes" will be used instead.

logs

Records of changes made to refs are stored in this directory. See [git-update-ref[1]](git-update-ref) for more information. This directory is ignored (except logs/HEAD) if $GIT\_COMMON\_DIR is set and "$GIT\_COMMON\_DIR/logs" will be used instead.

logs/refs/heads/`name`

Records all changes made to the branch tip named `name`.

logs/refs/tags/`name`

Records all changes made to the tag named `name`.

shallow

This is similar to `info/grafts` but is internally used and maintained by the shallow clone mechanism. See the `--depth` option to [git-clone[1]](git-clone) and [git-fetch[1]](git-fetch).
This file is ignored if $GIT_COMMON_DIR is set and "$GIT_COMMON_DIR/shallow" will be used instead.

commondir

If this file exists, $GIT_COMMON_DIR (see [git[1]](git)) will be set to the path specified in this file unless it is explicitly set. If the specified path is relative, it is relative to $GIT_DIR. A repository with a commondir file is incomplete without the repository pointed to by "commondir".

modules

Contains the Git repositories of the submodules.

worktrees

Contains administrative data for linked working trees. Each subdirectory contains the working tree-related part of a linked working tree. This directory is ignored if $GIT_COMMON_DIR is set, in which case "$GIT_COMMON_DIR/worktrees" will be used instead.

worktrees/<id>/gitdir

A text file containing the absolute path back to the .git file that points to here. This is used to check whether the linked repository has been manually removed, in which case this directory no longer needs to be kept. The mtime of this file should be updated every time the linked repository is accessed.

worktrees/<id>/locked

If this file exists, the linked working tree may be on a portable device and not available. The presence of this file prevents `worktrees/<id>` from being pruned either automatically or manually by `git worktree prune`. The file may contain a string explaining why the repository is locked.

worktrees/<id>/config.worktree

Working-tree-specific configuration file.

Git repository format versions
------------------------------

Every Git repository is marked with a numeric version in the `core.repositoryformatversion` key of its `config` file. This version specifies the rules for operating on the on-disk repository data. An implementation of Git which does not understand a particular version advertised by an on-disk repository MUST NOT operate on that repository; doing so risks not only producing wrong results, but actually losing data. Because of this rule, version bumps should be kept to an absolute minimum.
Instead, we generally prefer these strategies:

* bumping format version numbers of individual data files (e.g., index, packfiles, etc.). This restricts the incompatibilities only to those files.
* introducing new data that gracefully degrades when used by older clients (e.g., pack bitmap files are ignored by older clients, which simply do not take advantage of the optimization they provide).

A whole-repository format version bump should only be part of a change that cannot be independently versioned. For instance, if one were to change the reachability rules for objects, or the rules for locking refs, that would require a bump of the repository format version.

Note that this applies only to accessing the repository’s disk contents directly. An older client which understands only format `0` may still connect via `git://` to a repository using format `1`, as long as the server process understands format `1`.

The preferred strategy for rolling out a version bump (whether whole repository or for a single file) is to teach git to read the new format, and to allow writing the new format with a config switch or command line option (for experimentation or for those who do not care about backwards compatibility with older gits). Then, after a long period to allow the reading capability to become common, we may switch to writing the new format by default.

The currently defined format versions are:

### Version `0`

This is the format defined by the initial version of git, including but not limited to the format of the repository directory, the repository configuration file, and the object and ref storage. Specifying the complete behavior of git is beyond the scope of this document.

### Version `1`

This format is identical to version `0`, with the following exceptions:

1. When reading the `core.repositoryformatversion` variable, a git implementation which supports version 1 MUST also read any configuration keys found in the `extensions` section of the configuration file.
2.
If a version-1 repository specifies any `extensions.*` keys that the running git has not implemented, the operation MUST NOT proceed. Similarly, if the value of any known key is not understood by the implementation, the operation MUST NOT proceed.

Note that if no extensions are specified in the config file, then `core.repositoryformatversion` SHOULD be set to `0` (setting it to `1` provides no benefit, and makes the repository incompatible with older implementations of git).

This document will serve as the master list for extensions. Any implementation wishing to define a new extension should make a note of it here, in order to claim the name.

The defined extensions are:

#### `noop`

This extension does not change git’s behavior at all. It is useful only for testing format-1 compatibility.

#### `preciousObjects`

When the config key `extensions.preciousObjects` is set to `true`, objects in the repository MUST NOT be deleted (e.g., by `git-prune` or `git repack -d`).

#### `partialClone`

When the config key `extensions.partialClone` is set, it indicates that the repo was created with a partial clone (or later performed a partial fetch) and that the remote may have omitted sending certain unwanted objects. Such a remote is called a "promisor remote" and it promises that all such omitted objects can be fetched from it in the future.

The value of this key is the name of the promisor remote.

#### `worktreeConfig`

If set, by default "git config" reads from both the "config" and "config.worktree" files from GIT_DIR, in that order.
In multiple working directory mode, the "config" file is shared while "config.worktree" is per-working-directory (i.e., it’s in GIT_COMMON_DIR/worktrees/<id>/config.worktree).

See also
--------

[git-init[1]](git-init), [git-clone[1]](git-clone), [git-fetch[1]](git-fetch), [git-pack-refs[1]](git-pack-refs), [git-gc[1]](git-gc), [git-checkout[1]](git-checkout), [gitglossary[7]](gitglossary), [The Git User’s Manual](user-manual)

gitprotocol-pack
================

Name
----

gitprotocol-pack - How packs are transferred over-the-wire

Synopsis
--------

```
<over-the-wire-protocol>
```

Description
-----------

Git supports transferring data in packfiles over the ssh://, git://, http:// and file:// transports. There exist two sets of protocols, one for pushing data from a client to a server and another for fetching data from a server to a client. The three transports (ssh, git, file) use the same protocol to transfer data. http is documented in [gitprotocol-http[5]](gitprotocol-http).

The processes invoked in the canonical Git implementation are `upload-pack` on the server side and `fetch-pack` on the client side for fetching data; then `receive-pack` on the server and `send-pack` on the client for pushing data. The protocol functions to have a server tell a client what is currently on the server, then for the two to negotiate the smallest amount of data to send in order to fully update one or the other.

Pkt-line format
---------------

The descriptions below build on the pkt-line format described in [gitprotocol-common[5]](gitprotocol-common). When the grammar indicates `PKT-LINE(...)`, unless otherwise noted the usual pkt-line LF rules apply: the sender SHOULD include a LF, but the receiver MUST NOT complain if it is not present.

An error packet is a special pkt-line that contains an error string.

```
error-line = PKT-LINE("ERR" SP explanation-text)
```

Throughout the protocol, where `PKT-LINE(...)` is expected, an error packet MAY be sent.
Once this packet is sent by a client or a server, the data transfer process defined in this protocol is terminated.

Transports
----------

There are three transports over which the packfile protocol is initiated. The Git transport is a simple, unauthenticated server that takes the command (almost always `upload-pack`, though Git servers can be configured to be globally writable, in which case `receive-pack` initiation is also allowed) with which the client wishes to communicate, executes it, and connects it to the requesting process.

In the SSH transport, the client just runs the `upload-pack` or `receive-pack` process on the server over the SSH protocol and then communicates with that invoked process over the SSH connection.

The file:// transport runs the `upload-pack` or `receive-pack` process locally and communicates with it over a pipe.

Extra parameters
----------------

The protocol provides a mechanism by which clients can send additional information in their first message to the server. These are called "Extra Parameters", and are supported by the Git, SSH, and HTTP protocols.

Each Extra Parameter takes the form of `<key>=<value>` or `<key>`.

Servers that receive any such Extra Parameters MUST ignore all unrecognized keys. Currently, the only Extra Parameter recognized is "version" with a value of `1` or `2`. See [gitprotocol-v2[5]](gitprotocol-v2) for more information on protocol version 2.

Git transport
-------------

The Git transport starts off by sending the command and repository on the wire using the pkt-line format, followed by a NUL byte and a hostname parameter, terminated by a NUL byte.
```
0033git-upload-pack /project.git\0host=myserver.com\0
```

The transport may send Extra Parameters by adding an additional NUL byte, and then adding one or more NUL-terminated strings:

```
003egit-upload-pack /project.git\0host=myserver.com\0\0version=1\0
```

```
git-proto-request = request-command SP pathname NUL
                    [ host-parameter NUL ] [ NUL extra-parameters ]
request-command   = "git-upload-pack" / "git-receive-pack" /
                    "git-upload-archive"   ; case sensitive
pathname          = *( %x01-ff ) ; exclude NUL
host-parameter    = "host=" hostname [ ":" port ]
extra-parameters  = 1*extra-parameter
extra-parameter   = 1*( %x01-ff ) NUL
```

host-parameter is used for the git-daemon name based virtual hosting. See the --interpolated-path option to git daemon, with the %H/%CH format characters.

Basically, to connect to an `upload-pack` process on the server side over the Git protocol, the Git client does this:

```
$ echo -e -n \
  "003agit-upload-pack /schacon/gitbook.git\0host=example.com\0" |
  nc -v example.com 9418
```

Ssh transport
-------------

Initiating the upload-pack or receive-pack processes over SSH amounts to executing the binary on the server via SSH remote execution. It is basically equivalent to running this:

```
$ ssh git.example.com "git-upload-pack '/project.git'"
```

For a server to support Git pushing and pulling for a given user over SSH, that user needs to be able to execute one or both of those commands via the SSH shell that they are provided on login. On some systems, that shell access is limited to only being able to run those two commands, or even just one of them.

In an ssh:// format URI, the path is absolute in the URI, so the `/` after the host name (or port number) is sent as an argument, which is then read by the remote git-upload-pack exactly as is, so it’s effectively an absolute path in the remote filesystem.
```
git clone ssh://[email protected]/project.git
             |
             v
ssh [email protected] "git-upload-pack '/project.git'"
```

In a "user@host:path" format URI, the path is relative to the user’s home directory, because the Git client will run:

```
git clone [email protected]:project.git
             |
             v
ssh [email protected] "git-upload-pack 'project.git'"
```

The exception is if a `~` is used, in which case we execute it without the leading `/`.

```
ssh://[email protected]/~alice/project.git,
             |
             v
ssh [email protected] "git-upload-pack '~alice/project.git'"
```

Depending on the value of the `protocol.version` configuration variable, Git may attempt to send Extra Parameters as a colon-separated string in the GIT_PROTOCOL environment variable. This is done only if the `ssh.variant` configuration variable indicates that the ssh command supports passing environment variables as an argument.

A few things to remember here:

* The "command name" is spelled with a dash (e.g. git-upload-pack), but this can be overridden by the client;
* The repository path is always quoted with single quotes.

Fetching data from a server
---------------------------

When one Git repository wants to get data that a second repository has, the first can `fetch` from the second. This operation determines what data the server has that the client does not, and then streams that data down to the client in packfile format.

Reference discovery
-------------------

When the client initially connects, the server will immediately respond with a version number (if "version=1" is sent as an Extra Parameter), and a listing of each reference it has (all branches and tags) along with the object name that each reference currently points to.
```
$ echo -e -n "0045git-upload-pack /schacon/gitbook.git\0host=example.com\0\0version=1\0" |
   nc -v example.com 9418
000eversion 1
00887217a7c7e582c46cec22a130adf4b9d7d950fba0 HEAD\0multi_ack thin-pack
side-band side-band-64k ofs-delta shallow no-progress include-tag
00441d3fcd5ced445d1abc402225c0b8a1299641f497 refs/heads/integration
003f7217a7c7e582c46cec22a130adf4b9d7d950fba0 refs/heads/master
003cb88d2441cac0977faf98efc80305012112238d9d refs/tags/v0.9
003c525128480b96c89e6418b1e40909bf6c5b2d580f refs/tags/v1.0
003fe92df48743b7bc7d26bcaabfddde0a1e20cae47c refs/tags/v1.0^{}
0000
```

The returned response is a pkt-line stream describing each ref and its current value. The stream MUST be sorted by name according to the C locale ordering.

If HEAD is a valid ref, HEAD MUST appear as the first advertised ref. If HEAD is not a valid ref, HEAD MUST NOT appear in the advertisement list at all, but other refs may still appear.

The stream MUST include capability declarations behind a NUL on the first ref. The peeled value of a ref (that is "ref^{}") MUST be immediately after the ref itself, if presented. A conforming server MUST peel the ref if it’s an annotated tag.

```
advertised-refs   = *1("version 1") (no-refs / list-of-refs)
                    *shallow flush-pkt
no-refs           = PKT-LINE(zero-id SP "capabilities^{}" NUL capability-list)
list-of-refs      = first-ref *other-ref
first-ref         = PKT-LINE(obj-id SP refname NUL capability-list)
other-ref         = PKT-LINE(other-tip / other-peeled)
other-tip         = obj-id SP refname
other-peeled      = obj-id SP refname "^{}"
shallow           = PKT-LINE("shallow" SP obj-id)
capability-list   = capability *(SP capability)
capability        = 1*(LC_ALPHA / DIGIT / "-" / "_")
LC_ALPHA          = %x61-7A
```

Server and client MUST use lowercase for obj-id, and both MUST treat obj-id as case-insensitive.

See protocol-capabilities.txt for a list of allowed server capabilities and descriptions.
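The pkt-line framing used throughout this protocol can be sketched in Python (the helper names here are ours, not part of Git). The lengths reproduce the examples above, e.g. `000eversion 1`:

```python
def pkt_line(payload: bytes) -> bytes:
    # Frame a payload as a pkt-line: 4 hex digits giving the total
    # length (including the 4-byte header itself), then the payload.
    # An empty payload is encoded here as a flush-pkt ("0000").
    if not payload:
        return b"0000"
    total = len(payload) + 4
    assert total <= 0xFFFF, "pkt-line payload too large"
    return b"%04x" % total + payload


def read_pkt_line(stream: bytes):
    # Return (payload, remainder). A flush-pkt yields an empty payload.
    length = int(stream[:4], 16)
    if length == 0:
        return b"", stream[4:]
    return stream[4:length], stream[length:]
```

For example, `pkt_line(b"git-upload-pack /project.git\x00host=myserver.com\x00")` yields the `0033...` request shown in the Git transport section, and `pkt_line(b"version 1\n")` yields `000eversion 1` followed by LF.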
Packfile negotiation
--------------------

After reference and capabilities discovery, the client can decide to terminate the connection by sending a flush-pkt, telling the server it can now gracefully terminate and disconnect, when it does not need any pack data. This can happen with the ls-remote command, and also can happen when the client already is up to date.

Otherwise, it enters the negotiation phase, where the client and server determine what the minimal packfile necessary for transport is, by telling the server what objects it wants, its shallow objects (if any), and the maximum commit depth it wants (if any). The client will also send a list of the capabilities it wants to be in effect, out of what the server said it could do, with the first `want` line.

```
upload-request    = want-list
                    *shallow-line
                    *1depth-request
                    [filter-request]
                    flush-pkt
want-list         = first-want
                    *additional-want
shallow-line      = PKT-LINE("shallow" SP obj-id)
depth-request     = PKT-LINE("deepen" SP depth) /
                    PKT-LINE("deepen-since" SP timestamp) /
                    PKT-LINE("deepen-not" SP ref)
first-want        = PKT-LINE("want" SP obj-id SP capability-list)
additional-want   = PKT-LINE("want" SP obj-id)
depth             = 1*DIGIT
filter-request    = PKT-LINE("filter" SP filter-spec)
```

Clients MUST send all the obj-ids they want from the reference discovery phase as `want` lines. Clients MUST send at least one `want` command in the request body. Clients MUST NOT mention an obj-id in a `want` command which did not appear in the response obtained through ref discovery.

The client MUST write all obj-ids which it only has shallow copies of (meaning that it does not have the parents of a commit) as `shallow` lines so that the server is aware of the limitations of the client’s history.

The client now sends the maximum commit history depth it wants for this transaction, which is the number of commits it wants from the tip of the history, if any, as a `deepen` line. A depth of 0 is the same as not making a depth request.
The client does not want to receive any commits beyond this depth, nor does it want objects needed only to complete those commits. Commits whose parents are not received as a result are defined as shallow and marked as such in the server. This information is sent back to the client in the next step.

The client can optionally request that pack-objects omit various objects from the packfile using one of several filtering techniques. These are intended for use with partial clone and partial fetch operations. An object that does not meet a filter-spec value is omitted unless explicitly requested in a `want` line. See `rev-list` for possible filter-spec values.

Once all the `want`s and `shallow`s (and optional `deepen`) are transferred, clients MUST send a flush-pkt, to tell the server side that it is done sending the list.

Otherwise, if the client sent a positive depth request, the server will determine which commits will and will not be shallow and send this information to the client. If the client did not request a positive depth, this step is skipped.

```
shallow-update    = *shallow-line
                    *unshallow-line
                    flush-pkt
shallow-line      = PKT-LINE("shallow" SP obj-id)
unshallow-line    = PKT-LINE("unshallow" SP obj-id)
```

If the client has requested a positive depth, the server will compute the set of commits which are no deeper than the desired depth. The set of commits starts at the client’s wants.

The server writes `shallow` lines for each commit whose parents will not be sent as a result. The server writes an `unshallow` line for each commit which the client has indicated is shallow, but is no longer shallow at the currently requested depth (that is, its parents will now be sent). The server MUST NOT mark as unshallow anything which the client has not indicated was shallow.

Now the client will send a list of the obj-ids it has using `have` lines, so the server can make a packfile that only contains the objects that the client needs.
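As a rough illustration (not the canonical implementation), the want-list and have-list framing described above can be assembled like this. `upload_request` is a hypothetical helper; per the grammar, only the first `want` line carries the capability list:

```python
def pkt_line(payload: bytes) -> bytes:
    # 4 hex digits of total length (header included), then the payload.
    return b"%04x" % (len(payload) + 4) + payload

FLUSH = b"0000"  # flush-pkt


def upload_request(wants, capabilities=(), haves=(), done=False):
    # Build a simplified upload-request: wants (first one with the
    # capability list), a flush-pkt, then an optional round of haves
    # ending in either "done" or another flush-pkt.
    out = []
    for i, oid in enumerate(wants):
        if i == 0 and capabilities:
            out.append(pkt_line(b"want %s %s\n" % (oid, b" ".join(capabilities))))
        else:
            out.append(pkt_line(b"want %s\n" % oid))
    out.append(FLUSH)
    for oid in haves:
        out.append(pkt_line(b"have %s\n" % oid))
    out.append(pkt_line(b"done\n") if done else FLUSH)
    return b"".join(out)
```

With the capabilities used in the sample sessions later in this document, the first line comes out as the same `0054want ...` pkt-line shown there.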
In multi_ack mode, the canonical implementation will send up to 32 of these at a time, then will send a flush-pkt. The canonical implementation will skip ahead and send the next 32 immediately, so that there is always a block of 32 "in-flight on the wire" at a time.

```
upload-haves      = have-list
                    compute-end
have-list         = *have-line
have-line         = PKT-LINE("have" SP obj-id)
compute-end       = flush-pkt / PKT-LINE("done")
```

If the server reads `have` lines, it then will respond by ACKing any of the obj-ids the client said it had that the server also has. The server will ACK obj-ids differently depending on which ack mode is chosen by the client.

In multi_ack mode:

* the server will respond with `ACK obj-id continue` for any common commits.
* once the server has found an acceptable common base commit and is ready to make a packfile, it will blindly ACK all `have` obj-ids back to the client.
* the server will then send a `NAK` and then wait for another response from the client - either a `done` or another list of `have` lines.

In multi_ack_detailed mode:

* the server will differentiate the ACKs where it is signaling that it is ready to send data with `ACK obj-id ready` lines, and signals the identified common commits with `ACK obj-id common` lines.

Without either multi_ack or multi_ack_detailed:

* upload-pack sends "ACK obj-id" on the first common object it finds. After that it says nothing until the client gives it a "done".
* upload-pack sends "NAK" on a flush-pkt if no common object has been found yet. If one has been found, and thus an ACK was already sent, it’s silent on the flush-pkt.
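A sketch of how a client might split the server's ACK/NAK pkt-lines into structured records (the function name and tuple shape are ours, for illustration only):

```python
def parse_server_response(data: bytes):
    # Split a stream of ACK/NAK pkt-lines into (kind, obj_id, status)
    # tuples; flush-pkts are skipped. status is the optional third word
    # ("continue", "common" or "ready"), or None when absent.
    results = []
    while data:
        length = int(data[:4], 16)
        if length == 0:          # flush-pkt
            data = data[4:]
            continue
        line, data = data[4:length], data[length:]
        line = line.rstrip(b"\n")
        if line == b"NAK":
            results.append((b"NAK", None, None))
        else:
            parts = line.split(b" ")  # b"ACK", obj-id, optional status
            results.append((parts[0], parts[1],
                            parts[2] if len(parts) > 2 else None))
    return results
```

Fed the server lines from the incremental-fetch example later in this section, it yields two `ACK ... continue` records followed by a `NAK`.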
After the client has gotten enough ACK responses that it can determine that the server has enough information to send an efficient packfile (in the canonical implementation, this is determined when it has received enough ACKs that it can color everything left in the --date-order queue as common with the server, or the --date-order queue is empty), or the client determines that it wants to give up (in the canonical implementation, this is determined when the client sends 256 `have` lines without getting any of them ACKed by the server - meaning there is nothing in common and the server should just send all of its objects), then the client will send a `done` command. The `done` command signals to the server that the client is ready to receive its packfile data.

However, the 256 limit **only** turns on in the canonical client implementation if we have received at least one "ACK %s continue" during a prior round. This helps to ensure that at least one common ancestor is found before we give up entirely.

Once the `done` line is read from the client, the server will either send a final `ACK obj-id` or it will send a `NAK`. `obj-id` is the object name of the last commit determined to be common. The server only sends ACK after `done` if there is at least one common base and multi_ack or multi_ack_detailed is enabled. The server always sends NAK after `done` if there is no common base found.

Instead of `ACK` or `NAK`, the server may send an error message (for example, if it does not recognize an object in a `want` line received from the client).

Then the server will start sending its packfile data.
```
server-response   = *ack_multi ack / nak
ack_multi         = PKT-LINE("ACK" SP obj-id ack_status)
ack_status        = "continue" / "common" / "ready"
ack               = PKT-LINE("ACK" SP obj-id)
nak               = PKT-LINE("NAK")
```

A simple clone may look like this (with no `have` lines):

```
C: 0054want 74730d410fcb6603ace96f1dc55ea6196122532d multi_ack \
     side-band-64k ofs-delta\n
C: 0032want 7d1665144a3a975c05f1f43902ddaf084e784dbe\n
C: 0032want 5a3f6be755bbb7deae50065988cbfa1ffa9ab68a\n
C: 0032want 7e47fe2bd8d01d481f44d7af0531bd93d3b21c01\n
C: 0032want 74730d410fcb6603ace96f1dc55ea6196122532d\n
C: 0000
C: 0009done\n

S: 0008NAK\n
S: [PACKFILE]
```

An incremental update (fetch) response might look like this:

```
C: 0054want 74730d410fcb6603ace96f1dc55ea6196122532d multi_ack \
     side-band-64k ofs-delta\n
C: 0032want 7d1665144a3a975c05f1f43902ddaf084e784dbe\n
C: 0032want 5a3f6be755bbb7deae50065988cbfa1ffa9ab68a\n
C: 0000
C: 0032have 7e47fe2bd8d01d481f44d7af0531bd93d3b21c01\n
C: [30 more have lines]
C: 0032have 74730d410fcb6603ace96f1dc55ea6196122532d\n
C: 0000

S: 003aACK 7e47fe2bd8d01d481f44d7af0531bd93d3b21c01 continue\n
S: 003aACK 74730d410fcb6603ace96f1dc55ea6196122532d continue\n
S: 0008NAK\n

C: 0009done\n

S: 0031ACK 74730d410fcb6603ace96f1dc55ea6196122532d\n
S: [PACKFILE]
```

Packfile data
-------------

Now that the client and server have finished negotiating what the minimal amount of data that needs to be sent to the client is, the server will construct and send the required data in packfile format.

See [gitformat-pack[5]](gitformat-pack) for what the packfile itself actually looks like.

If `side-band` or `side-band-64k` capabilities have been specified by the client, the server will send the packfile data multiplexed. Each packet starts with the pkt-line length of the amount of data that follows, followed by a single byte specifying the sideband the following data is coming in on.
In `side-band` mode, it will send up to 999 data bytes plus 1 control code, for a total of up to 1000 bytes in a pkt-line. In `side-band-64k` mode it will send up to 65519 data bytes plus 1 control code, for a total of up to 65520 bytes in a pkt-line.

The sideband byte will be a `1`, `2` or a `3`. Sideband `1` will contain packfile data, sideband `2` will be used for progress information that the client will generally print to stderr, and sideband `3` is used for error information.

If no `side-band` capability was specified, the server will stream the entire packfile without multiplexing.

Pushing data to a server
------------------------

Pushing data to a server will invoke the `receive-pack` process on the server, which will allow the client to tell it which references it should update and then send all the data the server will need for those new references to be complete. Once all the data is received and validated, the server will then update its references to what the client specified.

Authentication
--------------

The protocol itself contains no authentication mechanisms. That is to be handled by the transport, such as SSH, before the `receive-pack` process is invoked. If `receive-pack` is configured over the Git transport, those repositories will be writable by anyone who can access that port (9418), as that transport is unauthenticated.

Reference discovery
-------------------

The reference discovery phase is done nearly the same way as it is in the fetching protocol. Each reference obj-id and name on the server is sent in packet-line format to the client, followed by a flush-pkt. The only real difference is that the capability listing is different - the only possible values are `report-status`, `report-status-v2`, `delete-refs`, `ofs-delta`, `atomic` and `push-options`.
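The side-band multiplexing described under "Packfile data" can be sketched as follows; `mux` and `demux_sideband` are illustrative helpers (not part of Git), and the length limits per mode are not enforced here:

```python
def mux(band: int, chunk: bytes) -> bytes:
    # One multiplexed pkt-line: length header, then the sideband byte
    # (1 = pack data, 2 = progress, 3 = error), then the data itself.
    payload = bytes([band]) + chunk
    return b"%04x" % (len(payload) + 4) + payload


def demux_sideband(data: bytes):
    # Reassemble each sideband's bytes from a multiplexed stream,
    # stopping at the terminating flush-pkt ("0000").
    bands = {1: b"", 2: b"", 3: b""}
    while data:
        length = int(data[:4], 16)
        if length == 0:
            break
        payload, data = data[4:length], data[length:]
        bands[payload[0]] += payload[1:]
    return bands
```

A client would write band 1 to the packfile being received and band 2 to stderr as progress output.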
Reference update request and packfile transfer
----------------------------------------------

Once the client knows what references the server is at, it can send a list of reference update requests. For each reference on the server that it wants to update, it sends a line listing the obj-id currently on the server, the obj-id the client would like to update it to, and the name of the reference. This list is followed by a flush-pkt.

```
update-requests   = *shallow ( command-list | push-cert )

shallow           = PKT-LINE("shallow" SP obj-id)

command-list      = PKT-LINE(command NUL capability-list)
                    *PKT-LINE(command)
                    flush-pkt

command           = create / delete / update
create            = zero-id SP new-id SP name
delete            = old-id SP zero-id SP name
update            = old-id SP new-id SP name

old-id            = obj-id
new-id            = obj-id

push-cert         = PKT-LINE("push-cert" NUL capability-list LF)
                    PKT-LINE("certificate version 0.1" LF)
                    PKT-LINE("pusher" SP ident LF)
                    PKT-LINE("pushee" SP url LF)
                    PKT-LINE("nonce" SP nonce LF)
                    *PKT-LINE("push-option" SP push-option LF)
                    PKT-LINE(LF)
                    *PKT-LINE(command LF)
                    *PKT-LINE(gpg-signature-lines LF)
                    PKT-LINE("push-cert-end" LF)

push-option       = 1*( VCHAR | SP )
```

If the server has advertised the `push-options` capability and the client has specified `push-options` as part of the capability list above, the client then sends its push options, followed by a flush-pkt.

```
push-options      = *PKT-LINE(push-option) flush-pkt
```

For backwards compatibility with older Git servers, if the client sends a push cert and push options, it MUST send its push options both embedded within the push cert and after the push cert. (Note that the push options within the cert are prefixed, but the push options after the cert are not.) Both these lists MUST be the same, modulo the prefix.

After that the packfile that should contain all the objects that the server will need to complete the new references will be sent.
```
packfile          = "PACK" 28*(OCTET)
```

If the receiving end does not support delete-refs, the sending end MUST NOT ask for the delete command. If the receiving end does not support push-cert, the sending end MUST NOT send a push-cert command. When a push-cert command is sent, command-list MUST NOT be sent; the commands recorded in the push certificate are used instead.

The packfile MUST NOT be sent if the only command used is `delete`.

A packfile MUST be sent if either a create or update command is used, even if the server already has all the necessary objects. In this case the client MUST send an empty packfile. The only time this is likely to happen is if the client is creating a new branch or a tag that points to an existing obj-id.

The server will receive the packfile, unpack it, then validate, for each reference that is being updated, that it hasn’t changed while the request was being processed (the obj-id is still the same as the old-id), and it will run any update hooks to make sure that the update is acceptable. If all of that is fine, the server will then update the references.

Push certificate
----------------

A push certificate begins with a set of header lines. After the header and an empty line, the protocol commands follow, one per line. Note that the trailing LF in push-cert PKT-LINEs is `not` optional; it must be present.

Currently, the following header fields are defined:

`pusher` ident

Identify the GPG key in "Human Readable Name <email@address>" format.

`pushee` url

The repository URL (anonymized, if the URL contains authentication material) the user who ran `git push` intended to push into.

`nonce` nonce

The `nonce` string the receiving repository asked the pushing user to include in the certificate, to prevent replay attacks.

The GPG signature lines are a detached signature for the contents recorded in the push certificate before the signature block begins.
The detached signature is used to certify that the commands were given by the pusher, who must be the signer.

Report status
-------------

After receiving the pack data from the sender, the receiver sends a report if the `report-status` or `report-status-v2` capability is in effect. It is a short listing of what happened in that update. It will first list the status of the packfile unpacking as either `unpack ok` or `unpack [error]`. Then it will list the status for each of the references that it tried to update. Each line is either `ok [refname]` if the update was successful, or `ng [refname] [error]` if the update was not.

```
report-status     = unpack-status
                    1*(command-status)
                    flush-pkt

unpack-status     = PKT-LINE("unpack" SP unpack-result)
unpack-result     = "ok" / error-msg

command-status    = command-ok / command-fail
command-ok        = PKT-LINE("ok" SP refname)
command-fail      = PKT-LINE("ng" SP refname SP error-msg)

error-msg         = 1*(OCTET) ; where not "ok"
```

The `report-status-v2` capability extends the protocol by adding new option lines in order to support reporting of references rewritten by the `proc-receive` hook. The `proc-receive` hook may handle a command for a pseudo-reference which may create or update one or more references, and each reference may have a different name, a different new-oid, and a different old-oid.
```
report-status-v2  = unpack-status
                    1*(command-status-v2)
                    flush-pkt

unpack-status     = PKT-LINE("unpack" SP unpack-result)
unpack-result     = "ok" / error-msg

command-status-v2 = command-ok-v2 / command-fail
command-ok-v2     = command-ok
                    *option-line

command-ok        = PKT-LINE("ok" SP refname)
command-fail      = PKT-LINE("ng" SP refname SP error-msg)

error-msg         = 1*(OCTET) ; where not "ok"

option-line       = *1(option-refname)
                    *1(option-old-oid)
                    *1(option-new-oid)
                    *1(option-forced-update)

option-refname    = PKT-LINE("option" SP "refname" SP refname)
option-old-oid    = PKT-LINE("option" SP "old-oid" SP obj-id)
option-new-oid    = PKT-LINE("option" SP "new-oid" SP obj-id)
option-force      = PKT-LINE("option" SP "forced-update")
```

Updates can be unsuccessful for a number of reasons. The reference can have changed since the reference discovery phase was originally sent, meaning someone pushed in the meantime. The reference being pushed could be a non-fast-forward reference and the update hooks or configuration could be set to not allow that, etc. Also, some references can be updated while others can be rejected.

An example client/server communication might look like this:

```
S: 006274730d410fcb6603ace96f1dc55ea6196122532d refs/heads/local\0report-status delete-refs ofs-delta\n
S: 003e7d1665144a3a975c05f1f43902ddaf084e784dbe refs/heads/debug\n
S: 003f74730d410fcb6603ace96f1dc55ea6196122532d refs/heads/master\n
S: 003d74730d410fcb6603ace96f1dc55ea6196122532d refs/heads/team\n
S: 0000

C: 00677d1665144a3a975c05f1f43902ddaf084e784dbe 74730d410fcb6603ace96f1dc55ea6196122532d refs/heads/debug\n
C: 006874730d410fcb6603ace96f1dc55ea6196122532d 5a3f6be755bbb7deae50065988cbfa1ffa9ab68a refs/heads/master\n
C: 0000
C: [PACKDATA]

S: 000eunpack ok\n
S: 0018ok refs/heads/debug\n
S: 002ang refs/heads/master non-fast-forward\n
```
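A sketch of how a pushing client might decode such a report (the helper name and return shape are ours, for illustration; option lines from `report-status-v2` are not handled here):

```python
def parse_report_status(data: bytes):
    # Split the report into pkt-line payloads, stopping at the
    # flush-pkt, then separate the unpack-status from the per-ref
    # command-status lines ("ok <ref>" or "ng <ref> <error-msg>").
    lines = []
    while data:
        length = int(data[:4], 16)
        if length == 0:          # flush-pkt ends the report
            break
        lines.append(data[4:length].rstrip(b"\n"))
        data = data[length:]
    unpack = lines[0].split(b" ", 1)[1]   # "ok" or an error message
    commands = [tuple(line.split(b" ", 2)) for line in lines[1:]]
    return unpack, commands
```

Applied to the server's final lines in the example above, it reports a successful unpack, one updated ref, and one ref rejected as non-fast-forward.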
git git-var git-var ======= Name ---- git-var - Show a Git logical variable Synopsis -------- ``` git var (-l | <variable>) ``` Description ----------- Prints a Git logical variable. Options ------- -l Cause the logical variables to be listed. In addition, all the variables of the Git configuration file .git/config are listed as well. (However, the configuration variables listing functionality is deprecated in favor of `git config -l`.) Examples -------- ``` $ git var GIT_AUTHOR_IDENT Eric W. Biederman <[email protected]> 1121223278 -0600 ``` Variables --------- GIT\_AUTHOR\_IDENT The author of a piece of code. GIT\_COMMITTER\_IDENT The person who put a piece of code into Git. GIT\_EDITOR Text editor for use by Git commands. The value is meant to be interpreted by the shell when it is used. Examples: `~/bin/vi`, `$SOME_ENVIRONMENT_VARIABLE`, `"C:\Program Files\Vim\gvim.exe" --nofork`. The order of preference is the `$GIT_EDITOR` environment variable, then `core.editor` configuration, then `$VISUAL`, then `$EDITOR`, and then the default chosen at compile time, which is usually `vi`. GIT\_PAGER Text viewer for use by Git commands (e.g., `less`). The value is meant to be interpreted by the shell. The order of preference is the `$GIT_PAGER` environment variable, then `core.pager` configuration, then `$PAGER`, and then the default chosen at compile time (usually `less`). GIT\_DEFAULT\_BRANCH The name of the first branch created in newly initialized repositories. See also -------- [git-commit-tree[1]](git-commit-tree) [git-tag[1]](git-tag) [git-config[1]](git-config) git git-fast-import git-fast-import =============== Name ---- git-fast-import - Backend for fast Git data importers Synopsis -------- ``` frontend | git fast-import [<options>] ``` Description ----------- This program is usually not what the end user wants to run directly. 
Most end users want to use one of the existing frontend programs, which parses a specific type of foreign source and feeds the contents stored there to `git fast-import`. fast-import reads a mixed command/data stream from standard input and writes one or more packfiles directly into the current repository. When EOF is received on standard input, fast import writes out updated branch and tag refs, fully updating the current repository with the newly imported data. The fast-import backend itself can import into an empty repository (one that has already been initialized by `git init`) or incrementally update an existing populated repository. Whether or not incremental imports are supported from a particular foreign source depends on the frontend program in use. Options ------- --force Force updating modified existing branches, even if doing so would cause commits to be lost (as the new commit does not contain the old commit). --quiet Disable the output shown by --stats, making fast-import usually be silent when it is successful. However, if the import stream has directives intended to show user output (e.g. `progress` directives), the corresponding messages will still be shown. --stats Display some basic statistics about the objects fast-import has created, the packfiles they were stored into, and the memory used by fast-import during this run. Showing this output is currently the default, but can be disabled with --quiet. --allow-unsafe-features Many command-line options can be provided as part of the fast-import stream itself by using the `feature` or `option` commands. However, some of these options are unsafe (e.g., allowing fast-import to access the filesystem outside of the repository). These options are disabled by default, but can be allowed by providing this option on the command line. This currently impacts only the `export-marks`, `import-marks`, and `import-marks-if-exists` feature commands. 
Only enable this option if you trust the program generating the fast-import stream! This option is enabled automatically for remote-helpers that use the `import` capability, as they are already trusted to run their own code.

### Options for Frontends

--cat-blob-fd=<fd>
Write responses to `get-mark`, `cat-blob`, and `ls` queries to the file descriptor <fd> instead of `stdout`. Allows `progress` output intended for the end-user to be separated from other output.

--date-format=<fmt>
Specify the type of dates the frontend will supply to fast-import within `author`, `committer` and `tagger` commands. See “Date Formats” below for details about which formats are supported, and their syntax.

--done
Terminate with error if there is no `done` command at the end of the stream. This option might be useful for detecting errors that cause the frontend to terminate before it has started to write a stream.

### Locations of Marks Files

--export-marks=<file>
Dumps the internal marks table to <file> when complete. Marks are written one per line as `:markid SHA-1`. Frontends can use this file to validate imports after they have been completed, or to save the marks table across incremental runs. As <file> is only opened and truncated at checkpoint (or completion) the same path can also be safely given to --import-marks.

--import-marks=<file>
Before processing any input, load the marks specified in <file>. The input file must exist, must be readable, and must use the same format as produced by --export-marks. Multiple options may be supplied to import more than one set of marks. If a mark is defined to different values, the last file wins.

--import-marks-if-exists=<file>
Like --import-marks but instead of erroring out, silently skips the file if it does not exist.

--[no-]relative-marks
After specifying --relative-marks the paths specified with --import-marks= and --export-marks= are relative to an internal directory in the current repository.
In git-fast-import this means that the paths are relative to the .git/info/fast-import directory. However, other importers may use a different location. Relative and non-relative marks may be combined by interweaving --(no-)-relative-marks with the --(import|export)-marks= options. ### Submodule Rewriting --rewrite-submodules-from=<name>:<file> --rewrite-submodules-to=<name>:<file> Rewrite the object IDs for the submodule specified by <name> from the values used in the from <file> to those used in the to <file>. The from marks should have been created by `git fast-export`, and the to marks should have been created by `git fast-import` when importing that same submodule. <name> may be any arbitrary string not containing a colon character, but the same value must be used with both options when specifying corresponding marks. Multiple submodules may be specified with different values for <name>. It is an error not to use these options in corresponding pairs. These options are primarily useful when converting a repository from one hash algorithm to another; without them, fast-import will fail if it encounters a submodule because it has no way of writing the object ID into the new hash algorithm. ### Performance and Compression Tuning --active-branches=<n> Maximum number of branches to maintain active at once. See “Memory Utilization” below for details. Default is 5. --big-file-threshold=<n> Maximum size of a blob that fast-import will attempt to create a delta for, expressed in bytes. The default is 512m (512 MiB). Some importers may wish to lower this on systems with constrained memory. --depth=<n> Maximum delta depth, for blob and tree deltification. Default is 50. --export-pack-edges=<file> After creating a packfile, print a line of data to <file> listing the filename of the packfile and the last commit on each branch that was written to that packfile. 
This information may be useful after importing projects whose total object set exceeds the 4 GiB packfile limit, as these commits can be used as edge points during calls to `git pack-objects`. --max-pack-size=<n> Maximum size of each output packfile. The default is unlimited. fastimport.unpackLimit See [git-config[1]](git-config) Performance ----------- The design of fast-import allows it to import large projects in a minimum amount of memory usage and processing time. Assuming the frontend is able to keep up with fast-import and feed it a constant stream of data, import times for projects holding 10+ years of history and containing 100,000+ individual commits are generally completed in just 1-2 hours on quite modest (~$2,000 USD) hardware. Most bottlenecks appear to be in foreign source data access (the source just cannot extract revisions fast enough) or disk IO (fast-import writes as fast as the disk will take the data). Imports will run faster if the source data is stored on a different drive than the destination Git repository (due to less IO contention). Development cost ---------------- A typical frontend for fast-import tends to weigh in at approximately 200 lines of Perl/Python/Ruby code. Most developers have been able to create working importers in just a couple of hours, even though it is their first exposure to fast-import, and sometimes even to Git. This is an ideal situation, given that most conversion tools are throw-away (use once, and never look back). Parallel operation ------------------ Like `git push` or `git fetch`, imports handled by fast-import are safe to run alongside parallel `git repack -a -d` or `git gc` invocations, or any other Git operation (including `git prune`, as loose objects are never used by fast-import). fast-import does not lock the branch or tag refs it is actively importing. 
After the import, during its ref update phase, fast-import tests each existing branch ref to verify the update will be a fast-forward update (the commit stored in the ref is contained in the new history of the commit to be written). If the update is not a fast-forward update, fast-import will skip updating that ref and instead prints a warning message. fast-import will always attempt to update all branch refs, and does not stop on the first failure. Branch updates can be forced with --force, but it’s recommended that this only be used on an otherwise quiet repository. Using --force is not necessary for an initial import into an empty repository. Technical discussion -------------------- fast-import tracks a set of branches in memory. Any branch can be created or modified at any point during the import process by sending a `commit` command on the input stream. This design allows a frontend program to process an unlimited number of branches simultaneously, generating commits in the order they are available from the source data. It also simplifies the frontend programs considerably. fast-import does not use or alter the current working directory, or any file within it. (It does however update the current Git repository, as referenced by `GIT_DIR`.) Therefore an import frontend may use the working directory for its own purposes, such as extracting file revisions from the foreign source. This ignorance of the working directory also allows fast-import to run very quickly, as it does not need to perform any costly file update operations when switching between branches. Input format ------------ With the exception of raw file data (which Git does not interpret) the fast-import input format is text (ASCII) based. This text based format simplifies development and debugging of frontend programs, especially when a higher level language such as Perl, Python or Ruby is being used. fast-import is very strict about its input. Where we say SP below we mean **exactly** one space. 
Likewise LF means one (and only one) linefeed and HT one (and only one) horizontal tab. Supplying additional whitespace characters will cause unexpected results, such as branch names or file names with leading or trailing spaces in their name, or early termination of fast-import when it encounters unexpected input. ### Stream Comments To aid in debugging frontends fast-import ignores any line that begins with `#` (ASCII pound/hash) up to and including the line ending `LF`. A comment line may contain any sequence of bytes that does not contain an LF and therefore may be used to include any detailed debugging information that might be specific to the frontend and useful when inspecting a fast-import data stream. ### Date Formats The following date formats are supported. A frontend should select the format it will use for this import by passing the format name in the --date-format=<fmt> command-line option. `raw` This is the Git native format and is `<time> SP <offutc>`. It is also fast-import’s default format, if --date-format was not specified. The time of the event is specified by `<time>` as the number of seconds since the UNIX epoch (midnight, Jan 1, 1970, UTC) and is written as an ASCII decimal integer. The local offset is specified by `<offutc>` as a positive or negative offset from UTC. For example EST (which is 5 hours behind UTC) would be expressed in `<tz>` by “-0500” while UTC is “+0000”. The local offset does not affect `<time>`; it is used only as an advisement to help formatting routines display the timestamp. If the local offset is not available in the source material, use “+0000”, or the most common local offset. For example many organizations have a CVS repository which has only ever been accessed by users who are located in the same location and time zone. In this case a reasonable offset from UTC could be assumed. Unlike the `rfc2822` format, this format is very strict. 
Any variation in formatting will cause fast-import to reject the value, and some sanity checks on the numeric values may also be performed. `raw-permissive` This is the same as `raw` except that no sanity checks on the numeric epoch and local offset are performed. This can be useful when trying to filter or import an existing history with e.g. bogus timezone values. `rfc2822` This is the standard email format as described by RFC 2822. An example value is “Tue Feb 6 11:22:18 2007 -0500”. The Git parser is accurate, but a little on the lenient side. It is the same parser used by `git am` when applying patches received from email. Some malformed strings may be accepted as valid dates. In some of these cases Git will still be able to obtain the correct date from the malformed string. There are also some types of malformed strings which Git will parse wrong, and yet consider valid. Seriously malformed strings will be rejected. Unlike the `raw` format above, the time zone/UTC offset information contained in an RFC 2822 date string is used to adjust the date value to UTC prior to storage. Therefore it is important that this information be as accurate as possible. If the source material uses RFC 2822 style dates, the frontend should let fast-import handle the parsing and conversion (rather than attempting to do it itself) as the Git parser has been well tested in the wild. Frontends should prefer the `raw` format if the source material already uses UNIX-epoch format, can be coaxed to give dates in that format, or its format is easily convertible to it, as there is no ambiguity in parsing. `now` Always use the current time and time zone. The literal `now` must always be supplied for `<when>`. This is a toy format. The current time and time zone of this system is always copied into the identity string at the time it is being created by fast-import. There is no way to specify a different time or time zone. 
This particular format is supplied as it’s short to implement and may be useful to a process that wants to create a new commit right now, without needing to use a working directory or `git update-index`. If separate `author` and `committer` commands are used in a `commit` the timestamps may not match, as the system clock will be polled twice (once for each command). The only way to ensure that both author and committer identity information has the same timestamp is to omit `author` (thus copying from `committer`) or to use a date format other than `now`. ### Commands fast-import accepts several commands to update the current repository and control the current import process. More detailed discussion (with examples) of each command follows later. `commit` Creates a new branch or updates an existing branch by creating a new commit and updating the branch to point at the newly created commit. `tag` Creates an annotated tag object from an existing commit or branch. Lightweight tags are not supported by this command, as they are not recommended for recording meaningful points in time. `reset` Reset an existing branch (or a new branch) to a specific revision. This command must be used to change a branch to a specific revision without making a commit on it. `blob` Convert raw file data into a blob, for future use in a `commit` command. This command is optional and is not needed to perform an import. `alias` Record that a mark refers to a given object without first creating any new object. Using --import-marks and referring to missing marks will cause fast-import to fail, so aliases can provide a way to set otherwise pruned commits to a valid value (e.g. the nearest non-pruned ancestor). `checkpoint` Forces fast-import to close the current packfile, generate its unique SHA-1 checksum and index, and start a new packfile. This command is optional and is not needed to perform an import. `progress` Causes fast-import to echo the entire line to its own standard output. 
This command is optional and is not needed to perform an import. `done` Marks the end of the stream. This command is optional unless the `done` feature was requested using the `--done` command-line option or `feature done` command. `get-mark` Causes fast-import to print the SHA-1 corresponding to a mark to the file descriptor set with `--cat-blob-fd`, or `stdout` if unspecified. `cat-blob` Causes fast-import to print a blob in `cat-file --batch` format to the file descriptor set with `--cat-blob-fd` or `stdout` if unspecified. `ls` Causes fast-import to print a line describing a directory entry in `ls-tree` format to the file descriptor set with `--cat-blob-fd` or `stdout` if unspecified. `feature` Enable the specified feature. This requires that fast-import supports the specified feature, and aborts if it does not. `option` Specify any of the options listed under OPTIONS that do not change stream semantic to suit the frontend’s needs. This command is optional and is not needed to perform an import. ### `commit` Create or update a branch with a new commit, recording one logical change to the project. ``` 'commit' SP <ref> LF mark? original-oid? ('author' (SP <name>)? SP LT <email> GT SP <when> LF)? 'committer' (SP <name>)? SP LT <email> GT SP <when> LF ('encoding' SP <encoding>)? data ('from' SP <commit-ish> LF)? ('merge' SP <commit-ish> LF)* (filemodify | filedelete | filecopy | filerename | filedeleteall | notemodify)* LF? ``` where `<ref>` is the name of the branch to make the commit on. Typically branch names are prefixed with `refs/heads/` in Git, so importing the CVS branch symbol `RELENG-1_0` would use `refs/heads/RELENG-1_0` for the value of `<ref>`. The value of `<ref>` must be a valid refname in Git. As `LF` is not valid in a Git refname, no quoting or escaping syntax is supported here. A `mark` command may optionally appear, requesting fast-import to save a reference to the newly created commit for future use by the frontend (see below for format). 
It is very common for frontends to mark every commit they create, thereby allowing future branch creation from any imported commit. The `data` command following `committer` must supply the commit message (see below for `data` command syntax). To import an empty commit message use a 0 length data. Commit messages are free-form and are not interpreted by Git. Currently they must be encoded in UTF-8, as fast-import does not permit other encodings to be specified. Zero or more `filemodify`, `filedelete`, `filecopy`, `filerename`, `filedeleteall` and `notemodify` commands may be included to update the contents of the branch prior to creating the commit. These commands may be supplied in any order. However it is recommended that a `filedeleteall` command precede all `filemodify`, `filecopy`, `filerename` and `notemodify` commands in the same commit, as `filedeleteall` wipes the branch clean (see below). The `LF` after the command is optional (it used to be required). Note that for reasons of backward compatibility, if the commit ends with a `data` command (i.e. it has no `from`, `merge`, `filemodify`, `filedelete`, `filecopy`, `filerename`, `filedeleteall` or `notemodify` commands) then two `LF` commands may appear at the end of the command instead of just one. #### `author` An `author` command may optionally appear, if the author information might differ from the committer information. If `author` is omitted then fast-import will automatically use the committer’s information for the author portion of the commit. See below for a description of the fields in `author`, as they are identical to `committer`. #### `committer` The `committer` command indicates who made this commit, and when they made it. Here `<name>` is the person’s display name (for example “Com M Itter”) and `<email>` is the person’s email address (“[email protected]”). `LT` and `GT` are the literal less-than (\x3c) and greater-than (\x3e) symbols. 
These are required to delimit the email address from the other fields in the line. Note that `<name>` and `<email>` are free-form and may contain any sequence of bytes, except `LT`, `GT` and `LF`. `<name>` is typically UTF-8 encoded. The time of the change is specified by `<when>` using the date format that was selected by the --date-format=<fmt> command-line option. See “Date Formats” above for the set of supported formats, and their syntax. #### `encoding` The optional `encoding` command indicates the encoding of the commit message. Most commits are UTF-8 and the encoding is omitted, but this allows importing commit messages into git without first reencoding them. #### `from` The `from` command is used to specify the commit to initialize this branch from. This revision will be the first ancestor of the new commit. The state of the tree built at this commit will begin with the state at the `from` commit, and be altered by the content modifications in this commit. Omitting the `from` command in the first commit of a new branch will cause fast-import to create that commit with no ancestor. This tends to be desired only for the initial commit of a project. If the frontend creates all files from scratch when making a new branch, a `merge` command may be used instead of `from` to start the commit with an empty tree. Omitting the `from` command on existing branches is usually desired, as the current commit on that branch is automatically assumed to be the first ancestor of the new commit. As `LF` is not valid in a Git refname or SHA-1 expression, no quoting or escaping syntax is supported within `<commit-ish>`. Here `<commit-ish>` is any of the following: * The name of an existing branch already in fast-import’s internal branch table. If fast-import doesn’t know the name, it’s treated as a SHA-1 expression. * A mark reference, `:<idnum>`, where `<idnum>` is the mark number. 
The reason fast-import uses `:` to denote a mark reference is this character is not legal in a Git branch name. The leading `:` makes it easy to distinguish between the mark 42 (`:42`) and the branch 42 (`42` or `refs/heads/42`), or an abbreviated SHA-1 which happened to consist only of base-10 digits. Marks must be declared (via `mark`) before they can be used. * A complete 40 byte or abbreviated commit SHA-1 in hex. * Any valid Git SHA-1 expression that resolves to a commit. See “SPECIFYING REVISIONS” in [gitrevisions[7]](gitrevisions) for details. * The special null SHA-1 (40 zeros) specifies that the branch is to be removed. The special case of restarting an incremental import from the current branch value should be written as: ``` from refs/heads/branch^0 ``` The `^0` suffix is necessary as fast-import does not permit a branch to start from itself, and the branch is created in memory before the `from` command is even read from the input. Adding `^0` will force fast-import to resolve the commit through Git’s revision parsing library, rather than its internal branch table, thereby loading in the existing value of the branch. #### `merge` Includes one additional ancestor commit. The additional ancestry link does not change the way the tree state is built at this commit. If the `from` command is omitted when creating a new branch, the first `merge` commit will be the first ancestor of the current commit, and the branch will start out with no files. An unlimited number of `merge` commands per commit are permitted by fast-import, thereby establishing an n-way merge. Here `<commit-ish>` is any of the commit specification expressions also accepted by `from` (see above). #### `filemodify` Included in a `commit` command to add a new file or change the content of an existing file. This command has two different means of specifying the content of the file. External data format The data content for the file was already supplied by a prior `blob` command. 
The frontend just needs to connect it.

```
'M' SP <mode> SP <dataref> SP <path> LF
```

Here usually `<dataref>` must be either a mark reference (`:<idnum>`) set by a prior `blob` command, or a full 40-byte SHA-1 of an existing Git blob object. If `<mode>` is `040000` then `<dataref>` must be the full 40-byte SHA-1 of an existing Git tree object or a mark reference set with `--import-marks`.

Inline data format
The data content for the file has not been supplied yet. The frontend wants to supply it as part of this modify command.

```
'M' SP <mode> SP 'inline' SP <path> LF
data
```

See below for a detailed description of the `data` command.

In both formats `<mode>` is the type of file entry, specified in octal. Git only supports the following modes:

* `100644` or `644`: A normal (not-executable) file. The majority of files in most projects use this mode. If in doubt, this is what you want.
* `100755` or `755`: A normal, but executable, file.
* `120000`: A symlink, the content of the file will be the link target.
* `160000`: A gitlink, SHA-1 of the object refers to a commit in another repository. Git links can only be specified by SHA or through a commit mark. They are used to implement submodules.
* `040000`: A subdirectory. Subdirectories can only be specified by SHA or through a tree mark set with `--import-marks`.

In both formats `<path>` is the complete path of the file to be added (if not already existing) or modified (if already existing). A `<path>` string must use UNIX-style directory separators (forward slash `/`), may contain any byte other than `LF`, and must not start with double quote (`"`). A path can use C-style string quoting; this is accepted in all cases and mandatory if the filename starts with double quote or contains `LF`. In C-style quoting, the complete name should be surrounded with double quotes, and any `LF`, backslash, or double quote characters must be escaped by preceding them with a backslash (e.g., `"path/with\n, \\ and \" in it"`).
The value of `<path>` must be in canonical form. That is it must not: * contain an empty directory component (e.g. `foo//bar` is invalid), * end with a directory separator (e.g. `foo/` is invalid), * start with a directory separator (e.g. `/foo` is invalid), * contain the special component `.` or `..` (e.g. `foo/./bar` and `foo/../bar` are invalid). The root of the tree can be represented by an empty string as `<path>`. It is recommended that `<path>` always be encoded using UTF-8. #### `filedelete` Included in a `commit` command to remove a file or recursively delete an entire directory from the branch. If the file or directory removal makes its parent directory empty, the parent directory will be automatically removed too. This cascades up the tree until the first non-empty directory or the root is reached. ``` 'D' SP <path> LF ``` here `<path>` is the complete path of the file or subdirectory to be removed from the branch. See `filemodify` above for a detailed description of `<path>`. #### `filecopy` Recursively copies an existing file or subdirectory to a different location within the branch. The existing file or directory must exist. If the destination exists it will be completely replaced by the content copied from the source. ``` 'C' SP <path> SP <path> LF ``` here the first `<path>` is the source location and the second `<path>` is the destination. See `filemodify` above for a detailed description of what `<path>` may look like. To use a source path that contains SP the path must be quoted. A `filecopy` command takes effect immediately. Once the source location has been copied to the destination any future commands applied to the source location will not impact the destination of the copy. #### `filerename` Renames an existing file or subdirectory to a different location within the branch. The existing file or directory must exist. If the destination exists it will be replaced by the source directory. 
``` 'R' SP <path> SP <path> LF ``` here the first `<path>` is the source location and the second `<path>` is the destination. See `filemodify` above for a detailed description of what `<path>` may look like. To use a source path that contains SP the path must be quoted. A `filerename` command takes effect immediately. Once the source location has been renamed to the destination any future commands applied to the source location will create new files there and not impact the destination of the rename. Note that a `filerename` is the same as a `filecopy` followed by a `filedelete` of the source location. There is a slight performance advantage to using `filerename`, but the advantage is so small that it is never worth trying to convert a delete/add pair in source material into a rename for fast-import. This `filerename` command is provided just to simplify frontends that already have rename information and don’t want bother with decomposing it into a `filecopy` followed by a `filedelete`. #### `filedeleteall` Included in a `commit` command to remove all files (and also all directories) from the branch. This command resets the internal branch structure to have no files in it, allowing the frontend to subsequently add all interesting files from scratch. ``` 'deleteall' LF ``` This command is extremely useful if the frontend does not know (or does not care to know) what files are currently on the branch, and therefore cannot generate the proper `filedelete` commands to update the content. Issuing a `filedeleteall` followed by the needed `filemodify` commands to set the correct content will produce the same results as sending only the needed `filemodify` and `filedelete` commands. The `filedeleteall` approach may however require fast-import to use slightly more memory per active branch (less than 1 MiB for even most large projects); so frontends that can easily obtain only the affected paths for a commit are encouraged to do so. 
#### `notemodify` Included in a `commit` `<notes_ref>` command to add a new note annotating a `<commit-ish>` or change this annotation contents. Internally it is similar to filemodify 100644 on `<commit-ish>` path (maybe split into subdirectories). It’s not advised to use any other commands to write to the `<notes_ref>` tree except `filedeleteall` to delete all existing notes in this tree. This command has two different means of specifying the content of the note. External data format The data content for the note was already supplied by a prior `blob` command. The frontend just needs to connect it to the commit that is to be annotated. ``` 'N' SP <dataref> SP <commit-ish> LF ``` Here `<dataref>` can be either a mark reference (`:<idnum>`) set by a prior `blob` command, or a full 40-byte SHA-1 of an existing Git blob object. Inline data format The data content for the note has not been supplied yet. The frontend wants to supply it as part of this modify command. ``` 'N' SP 'inline' SP <commit-ish> LF data ``` See below for a detailed description of the `data` command. In both formats `<commit-ish>` is any of the commit specification expressions also accepted by `from` (see above). ### `mark` Arranges for fast-import to save a reference to the current object, allowing the frontend to recall this object at a future point in time, without knowing its SHA-1. Here the current object is the object creation command the `mark` command appears within. This can be `commit`, `tag`, and `blob`, but `commit` is the most common usage. ``` 'mark' SP ':' <idnum> LF ``` where `<idnum>` is the number assigned by the frontend to this mark. The value of `<idnum>` is expressed as an ASCII decimal integer. The value 0 is reserved and cannot be used as a mark. Only values greater than or equal to 1 may be used as marks. New marks are created automatically. Existing marks can be moved to another object simply by reusing the same `<idnum>` in another `mark` command. 
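To make the `blob`, `mark`, `commit`, and `filemodify` commands concrete, here is a hedged sketch of a minimal stream a frontend might emit: one blob, then one commit that attaches it by mark. The branch name, path, identity, and mark numbers are invented for illustration, and the `data` commands use the exact byte count form described below:

```python
# Illustrative only: compose a tiny fast-import stream as bytes.
# A real frontend would pipe this to `git fast-import` on stdin.

def data_cmd(payload: bytes) -> bytes:
    """Emit a `data` command using the exact byte count form.
    The trailing LF after the payload is the optional one fast-import skips."""
    return b"data %d\n" % len(payload) + payload + b"\n"

blob = b"hello world\n"
stream = b"".join([
    b"blob\n",
    b"mark :1\n",                # :1 now names this blob for later commands
    data_cmd(blob),
    b"commit refs/heads/main\n",
    b"mark :2\n",
    b"committer C Ommitter <committer@example.com> 1121223278 -0600\n",  # `raw` date
    data_cmd(b"initial import\n"),                # the commit message
    b"M 100644 :1 hello.txt\n",  # filemodify: attach the blob at hello.txt
    b"\n",                       # optional LF ending the commit command
])
print(stream.decode())
```

A frontend would write this stream to fast-import’s standard input, in the spirit of the synopsis above: `frontend | git fast-import`. Since the first commit on `refs/heads/main` omits `from`, it is created with no ancestor.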
### `original-oid`

Provides the name of the object in the original source control system. fast-import will simply ignore this directive, but filter processes which operate on and modify the stream before feeding to fast-import may have uses for this information.

```
'original-oid' SP <object-identifier> LF
```

where `<object-identifier>` is any string not containing LF.

### `tag`

Creates an annotated tag referring to a specific commit. To create lightweight (non-annotated) tags see the `reset` command below.

```
'tag' SP <name> LF
mark?
'from' SP <commit-ish> LF
original-oid?
'tagger' (SP <name>)? SP LT <email> GT SP <when> LF
data
```

where `<name>` is the name of the tag to create. Tag names are automatically prefixed with `refs/tags/` when stored in Git, so importing the CVS branch symbol `RELENG-1_0-FINAL` would use just `RELENG-1_0-FINAL` for `<name>`, and fast-import will write the corresponding ref as `refs/tags/RELENG-1_0-FINAL`.

The value of `<name>` must be a valid refname in Git and therefore may contain forward slashes. As `LF` is not valid in a Git refname, no quoting or escaping syntax is supported here.

The `from` command is the same as in the `commit` command; see above for details.

The `tagger` command uses the same format as `committer` within `commit`; again see above for details.

The `data` command following `tagger` must supply the annotated tag message (see below for `data` command syntax). To import an empty tag message use a 0-length data. Tag messages are free-form and are not interpreted by Git. Currently they must be encoded in UTF-8, as fast-import does not permit other encodings to be specified.

Signing annotated tags during import from within fast-import is not supported. Trying to include your own PGP/GPG signature is not recommended, as the frontend does not (easily) have access to the complete set of bytes which normally goes into such a signature.
If signing is required, create lightweight tags from within fast-import with `reset`, then create the annotated versions of those tags offline with the standard `git tag` process.

### `reset`

Creates (or recreates) the named branch, optionally starting from a specific revision. The reset command allows a frontend to issue a new `from` command for an existing branch, or to create a new branch from an existing commit without creating a new commit.

```
'reset' SP <ref> LF
('from' SP <commit-ish> LF)?
LF?
```

For a detailed description of `<ref>` and `<commit-ish>` see above under `commit` and `from`.

The `LF` after the command is optional (it used to be required).

The `reset` command can also be used to create lightweight (non-annotated) tags. For example:

```
reset refs/tags/938
from :938
```

would create the lightweight tag `refs/tags/938` referring to whatever commit mark `:938` references.

### `blob`

Requests writing one file revision to the packfile. The revision is not connected to any commit; this connection must be formed in a subsequent `commit` command by referencing the blob through an assigned mark.

```
'blob' LF
mark?
original-oid?
data
```

The mark command is optional here as some frontends have chosen to generate the Git SHA-1 for the blob on their own, and feed that directly to `commit`. This is typically more work than it’s worth however, as marks are inexpensive to store and easy to use.

### `data`

Supplies raw data (for use as blob/file content, commit messages, or annotated tag messages) to fast-import. Data can be supplied using an exact byte count or delimited with a terminating line. Real frontends intended for production-quality conversions should always use the exact byte count format, as it is more robust and performs better. The delimited format is intended primarily for testing fast-import.
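For example, a complete `blob` command using the exact byte count format might look like this (the mark number and contents are invented for illustration; the count of 8 covers the seven characters plus the trailing LF):

```
blob
mark :1
data 8
abcdefg
```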
Comment lines appearing within the `<raw>` part of `data` commands are always taken to be part of the body of the data and are therefore never ignored by fast-import. This makes it safe to import any file/message content whose lines might start with `#`.

Exact byte count format

The frontend must specify the number of bytes of data.

```
'data' SP <count> LF
<raw> LF?
```

where `<count>` is the exact number of bytes appearing within `<raw>`. The value of `<count>` is expressed as an ASCII decimal integer. The `LF` on either side of `<raw>` is not included in `<count>` and will not be included in the imported data.

The `LF` after `<raw>` is optional (it used to be required) but recommended. Always including it makes debugging a fast-import stream easier as the next command always starts in column 0 of the next line, even if `<raw>` did not end with an `LF`.

Delimited format

A delimiter string is used to mark the end of the data. fast-import will compute the length by searching for the delimiter. This format is primarily useful for testing and is not recommended for real data.

```
'data' SP '<<' <delim> LF
<raw> LF
<delim> LF
LF?
```

where `<delim>` is the chosen delimiter string. The string `<delim>` must not appear on a line by itself within `<raw>`, as otherwise fast-import will think the data ends earlier than it really does.

The `LF` immediately trailing `<raw>` is part of `<raw>`. This is one of the limitations of the delimited format: it is impossible to supply a data chunk which does not have an LF as its last byte.

The `LF` after `<delim> LF` is optional (it used to be required).

### `alias`

Record that a mark refers to a given object without first creating any new object.

```
'alias' LF
mark
'to' SP <commit-ish> LF
LF?
```

For a detailed description of `<commit-ish>` see above under `from`.

### `checkpoint`

Forces fast-import to close the current packfile, start a new one, and to save out all current branch refs, tags and marks.

```
'checkpoint' LF
LF?
```

Note that fast-import automatically switches packfiles when the current packfile reaches --max-pack-size, or 4 GiB, whichever limit is smaller. During an automatic packfile switch fast-import does not update the branch refs, tags or marks.

As a `checkpoint` can require a significant amount of CPU time and disk IO (to compute the overall pack SHA-1 checksum, generate the corresponding index file, and update the refs) it can easily take several minutes for a single `checkpoint` command to complete.

Frontends may choose to issue checkpoints during extremely large and long-running imports, or when they need to allow another Git process access to a branch. However given that a 30 GiB Subversion repository can be loaded into Git through fast-import in about 3 hours, explicit checkpointing may not be necessary.

The `LF` after the command is optional (it used to be required).

### `progress`

Causes fast-import to print the entire `progress` line unmodified to its standard output channel (file descriptor 1) when the command is processed from the input stream. The command otherwise has no impact on the current import, or on any of fast-import’s internal state.

```
'progress' SP <any> LF
LF?
```

The `<any>` part of the command may contain any sequence of bytes that does not contain `LF`. The `LF` after the command is optional. Callers may wish to process the output through a tool such as sed to remove the leading part of the line, for example:

```
frontend | git fast-import | sed 's/^progress //'
```

Placing a `progress` command immediately after a `checkpoint` will inform the reader when the `checkpoint` has been completed and it can safely access the refs that fast-import updated.

### `get-mark`

Causes fast-import to print the SHA-1 corresponding to a mark to stdout or to the file descriptor previously arranged with the `--cat-blob-fd` argument.
The command otherwise has no impact on the current import; its purpose is to retrieve SHA-1s that later commits might want to refer to in their commit messages.

```
'get-mark' SP ':' <idnum> LF
```

See “Responses To Commands” below for details about how to read this output safely.

### `cat-blob`

Causes fast-import to print a blob to a file descriptor previously arranged with the `--cat-blob-fd` argument. The command otherwise has no impact on the current import; its main purpose is to retrieve blobs that may be in fast-import’s memory but not accessible from the target repository.

```
'cat-blob' SP <dataref> LF
```

The `<dataref>` can be either a mark reference (`:<idnum>`) set previously or a full 40-byte SHA-1 of a Git blob, preexisting or ready to be written.

Output uses the same format as `git cat-file --batch`:

```
<sha1> SP 'blob' SP <size> LF
<contents> LF
```

This command can be used where a `filemodify` directive can appear, allowing it to be used in the middle of a commit. For a `filemodify` using an inline directive, it can also appear right before the `data` directive.

See “Responses To Commands” below for details about how to read this output safely.

### `ls`

Prints information about the object at a path to a file descriptor previously arranged with the `--cat-blob-fd` argument. This allows printing a blob from the active commit (with `cat-blob`) or copying a blob or tree from a previous commit for use in the current one (with `filemodify`).

The `ls` command can also be used where a `filemodify` directive can appear, allowing it to be used in the middle of a commit.

Reading from the active commit

This form can only be used in the middle of a `commit`. The path names a directory entry within fast-import’s active commit. The path must be quoted in this case.
```
'ls' SP <path> LF
```

Reading from a named tree

The `<dataref>` can be a mark reference (`:<idnum>`) or the full 40-byte SHA-1 of a Git tag, commit, or tree object, preexisting or waiting to be written. The path is relative to the top level of the tree named by `<dataref>`.

```
'ls' SP <dataref> SP <path> LF
```

See `filemodify` above for a detailed description of `<path>`.

Output uses the same format as `git ls-tree <tree> -- <path>`:

```
<mode> SP ('blob' | 'tree' | 'commit') SP <dataref> HT <path> LF
```

The <dataref> represents the blob, tree, or commit object at <path> and can be used in later `get-mark`, `cat-blob`, `filemodify`, or `ls` commands.

If there is no file or subtree at that path, `git fast-import` will instead report

```
missing SP <path> LF
```

See “Responses To Commands” below for details about how to read this output safely.

### `feature`

Require that fast-import support the specified feature, or abort if it does not.

```
'feature' SP <feature> ('=' <argument>)? LF
```

The <feature> part of the command may be any one of the following:

date-format
export-marks
relative-marks
no-relative-marks
force

Act as though the corresponding command-line option with a leading `--` was passed on the command line (see OPTIONS, above).

import-marks
import-marks-if-exists

Like --import-marks except in three respects: first, only one "feature import-marks" or "feature import-marks-if-exists" command is allowed per stream; second, an --import-marks= or --import-marks-if-exists command-line option overrides any of these "feature" commands in the stream; third, "feature import-marks-if-exists" like a corresponding command-line option silently skips a nonexistent file.

get-mark
cat-blob
ls

Require that the backend support the `get-mark`, `cat-blob`, or `ls` command respectively. Versions of fast-import not supporting the specified command will exit with a message indicating so.
This lets the import error out early with a clear message, rather than wasting time on the early part of an import before the unsupported command is detected.

notes

Require that the backend support the `notemodify` (N) subcommand to the `commit` command. Versions of fast-import not supporting notes will exit with a message indicating so.

done

Error out if the stream ends without a `done` command. Without this feature, errors causing the frontend to end abruptly at a convenient point in the stream can go undetected. This may occur, for example, if an import front end dies in mid-operation without emitting SIGTERM or SIGKILL at its subordinate git fast-import instance.

### `option`

Processes the specified option so that git fast-import behaves in a way that suits the frontend’s needs. Note that options specified by the frontend are overridden by any options the user may specify to git fast-import itself.

```
'option' SP <option> LF
```

The `<option>` part of the command may contain any of the options listed in the OPTIONS section that do not change import semantics, without the leading `--`, and is treated in the same way.

Option commands must be the first commands on the input (not counting feature commands), so giving an option command after any non-option command is an error.

The following command-line options change import semantics and may therefore not be passed as option:

* date-format
* import-marks
* export-marks
* cat-blob-fd
* force

### `done`

If the `done` feature is not in use, treated as if EOF was read. This can be used to tell fast-import to finish early.

If the `--done` command-line option or `feature done` command is in use, the `done` command is mandatory and marks the end of the stream.

Responses to commands
---------------------

New objects written by fast-import are not available immediately. Most fast-import commands have no visible effect until the next checkpoint (or completion).
The frontend can send commands to fill fast-import’s input pipe without worrying about how quickly they will take effect, which improves performance by simplifying scheduling.

For some frontends, though, it is useful to be able to read back data from the current repository as it is being updated (for example when the source material describes objects in terms of patches to be applied to previously imported objects). This can be accomplished by connecting the frontend and fast-import via bidirectional pipes:

```
mkfifo fast-import-output
frontend <fast-import-output |
git fast-import >fast-import-output
```

A frontend set up this way can use `progress`, `get-mark`, `ls`, and `cat-blob` commands to read information from the import in progress.

To avoid deadlock, such frontends must completely consume any pending output from `progress`, `ls`, `get-mark`, and `cat-blob` before performing writes to fast-import that might block.

Crash reports
-------------

If fast-import is supplied invalid input it will terminate with a non-zero exit status and create a crash report in the top level of the Git repository it was importing into. Crash reports contain a snapshot of the internal fast-import state as well as the most recent commands that led up to the crash.

All recent commands (including stream comments, file changes and progress commands) are shown in the command history within the crash report, but raw file data and commit messages are excluded from the crash report. This exclusion saves space within the report file and reduces the amount of buffering that fast-import must perform during execution.

After writing a crash report fast-import will close the current packfile and export the marks table. This allows the frontend developer to inspect the repository state and resume the import from the point where it crashed. The modified branches and tags are not updated during a crash, as the import did not complete successfully.
Branch and tag information can be found in the crash report and must be applied manually if the update is needed.

An example crash:

```
$ cat >in <<END_OF_INPUT
# my very first test commit
commit refs/heads/master
committer Shawn O. Pearce <spearce> 19283 -0400
# who is that guy anyway?
data <<EOF
this is my commit
EOF
M 644 inline .gitignore
data <<EOF
.gitignore
EOF
M 777 inline bob
END_OF_INPUT
```

```
$ git fast-import <in
fatal: Corrupt mode: M 777 inline bob
fast-import: dumping crash report to .git/fast_import_crash_8434
```

```
$ cat .git/fast_import_crash_8434
fast-import crash report:
    fast-import process: 8434
    parent process     : 1391
    at Sat Sep 1 00:58:12 2007
```

```
fatal: Corrupt mode: M 777 inline bob
```

```
Most Recent Commands Before Crash
---------------------------------
  # my very first test commit
  commit refs/heads/master
  committer Shawn O. Pearce <spearce> 19283 -0400
  # who is that guy anyway?
  data <<EOF
  M 644 inline .gitignore
  data <<EOF
* M 777 inline bob
```

```
Active Branch LRU
-----------------
    active_branches = 1 cur, 5 max
```

```
  pos  clock name
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   1)      0 refs/heads/master
```

```
Inactive Branches
-----------------
refs/heads/master:
  status      : active loaded dirty
  tip commit  : 0000000000000000000000000000000000000000
  old tree    : 0000000000000000000000000000000000000000
  cur tree    : 0000000000000000000000000000000000000000
  commit clock: 0
  last pack   :
```

```
-------------------
END OF CRASH REPORT
```

Tips and tricks
---------------

The following tips and tricks have been collected from various users of fast-import, and are offered here as suggestions.

### Use One Mark Per Commit

When doing a repository conversion, use a unique mark per commit (`mark :<n>`) and supply the --export-marks option on the command line. fast-import will dump a file which lists every mark and the Git object SHA-1 that corresponds to it.
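For instance, a file written with `--export-marks=marks.txt` contains one `:<idnum> SP <sha1>` pair per line; the object names below are placeholders, not real SHA-1s:

```
:1 2fe1dcf1e9fbba9a1ff3ef5bf7a3bd7c57059a17
:2 8d2f0ff4f3bd2f2fab0c04f8c8dcd42aee1a2b67
:3 0ab1bd7c66a1b8a2cd3f1d2f9f25ef4a5d6c7e8f
```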
If the frontend can tie the marks back to the source repository, it is easy to verify the accuracy and completeness of the import by comparing each Git commit to the corresponding source revision. Coming from a system such as Perforce or Subversion this should be quite simple, as the fast-import mark can also be the Perforce changeset number or the Subversion revision number.

### Freely Skip Around Branches

Don’t bother trying to optimize the frontend to stick to one branch at a time during an import. Although doing so might be slightly faster for fast-import, it tends to increase the complexity of the frontend code considerably.

The branch LRU built into fast-import tends to behave very well, and the cost of activating an inactive branch is so low that bouncing around between branches has virtually no impact on import performance.

### Handling Renames

When importing a renamed file or directory, simply delete the old name(s) and modify the new name(s) during the corresponding commit. Git performs rename detection after-the-fact, rather than explicitly during a commit.

### Use Tag Fixup Branches

Some other SCM systems let the user create a tag from multiple files which are not from the same commit/changeset, or create tags which are a subset of the files available in the repository.

Importing these tags as-is in Git is impossible without making at least one commit which “fixes up” the files to match the content of the tag.

Use fast-import’s `reset` command to reset a dummy branch outside of your normal branch space to the base commit for the tag, then commit one or more file fixup commits, and finally tag the dummy branch.

For example since all normal branches are stored under `refs/heads/` name the tag fixup branch `TAG_FIXUP`. This way it is impossible for the fixup branch used by the importer to have namespace conflicts with real branches imported from the source (the name `TAG_FIXUP` is not `refs/heads/TAG_FIXUP`).
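The pattern can be sketched as a stream fragment like the following. The mark numbers, the file being fixed up, the committer identity, and the tag message are all invented for illustration; `:100` is assumed to be the base commit and `:42` a previously defined blob:

```
reset TAG_FIXUP
from :100

commit TAG_FIXUP
mark :101
committer C O Mitter <committer@example.com> 1234567890 +0000
data <<EOF
fixup: adjust VERSION to match the tagged release
EOF
M 100644 :42 VERSION

tag RELENG-1_0-FINAL
from :101
tagger C O Mitter <committer@example.com> 1234567890 +0000
data <<EOF
1.0 final
EOF
```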
When committing fixups, consider using `merge` to connect the commit(s) which are supplying file revisions to the fixup branch. Doing so will allow tools such as `git blame` to track through the real commit history and properly annotate the source files.

After fast-import terminates the frontend will need to do `rm .git/TAG_FIXUP` to remove the dummy branch.

### Import Now, Repack Later

As soon as fast-import completes the Git repository is completely valid and ready for use. Typically this takes only a very short time, even for considerably large projects (100,000+ commits).

However repacking the repository is necessary to improve data locality and access performance. It can also take hours on extremely large projects (especially if -f and a large --window parameter is used).

Since repacking is safe to run alongside readers and writers, run the repack in the background and simply let it run to completion. There is no reason to wait to explore your new Git project!

If you choose to wait for the repack, don’t try to run benchmarks or performance tests until repacking is completed. fast-import outputs suboptimal packfiles that are simply never seen in real use situations.

### Repacking Historical Data

If you are repacking very old imported data (e.g. older than the last year), consider expending some extra CPU time and supplying --window=50 (or higher) when you run `git repack`. This will take longer, but will also produce a smaller packfile. You only need to expend the effort once, and everyone using your project will benefit from the smaller repository.

### Include Some Progress Messages

Every once in a while have your frontend emit a `progress` message to fast-import. The contents of the messages are entirely free-form, so one suggestion would be to output the current month and year each time the current commit date moves into the next month. Your users will feel better knowing how much of the data stream has been processed.
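For example, a frontend following that suggestion might emit a line like the one below between two commits whenever the commit date crosses into a new month; fast-import copies the line verbatim to its standard output:

```
progress Importing March 2005
```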
Packfile optimization
---------------------

When packing a blob fast-import always attempts to deltify against the last blob written. Unless specifically arranged for by the frontend, this will probably not be a prior version of the same file, so the generated delta will not be the smallest possible. The resulting packfile will be compressed, but will not be optimal.

Frontends which have efficient access to all revisions of a single file (for example reading an RCS/CVS ,v file) can choose to supply all revisions of that file as a sequence of consecutive `blob` commands. This allows fast-import to deltify the different file revisions against each other, saving space in the final packfile. Marks can be used to later identify individual file revisions during a sequence of `commit` commands.

The packfile(s) created by fast-import do not encourage good disk access patterns. This is caused by fast-import writing the data in the order it is received on standard input, while Git typically organizes data within packfiles to make the most recent (current tip) data appear before historical data. Git also clusters commits together, speeding up revision traversal through better cache locality.

For this reason it is strongly recommended that users repack the repository with `git repack -a -d` after fast-import completes, allowing Git to reorganize the packfiles for faster data access. If blob deltas are suboptimal (see above) then also adding the `-f` option to force recomputation of all deltas can significantly reduce the final packfile size (30-50% smaller can be quite typical).

Instead of running `git repack` you can also run `git gc --aggressive`, which will also optimize other things after an import (e.g. pack loose refs). As noted in the "AGGRESSIVE" section in [git-gc[1]](git-gc) the `--aggressive` option will find new deltas with the `-f` option to [git-repack[1]](git-repack).
For the reasons elaborated on above, using `--aggressive` after a fast-import is one of the few cases where it’s known to be worthwhile.

Memory utilization
------------------

There are a number of factors which affect how much memory fast-import requires to perform an import. Like critical sections of core Git, fast-import uses its own memory allocators to amortize any overheads associated with malloc. In practice fast-import tends to amortize any malloc overheads to 0, due to its use of large block allocations.

### per object

fast-import maintains an in-memory structure for every object written in this execution. On a 32 bit system the structure is 32 bytes, on a 64 bit system the structure is 40 bytes (due to the larger pointer sizes). Objects in the table are not deallocated until fast-import terminates. Importing 2 million objects on a 32 bit system will require approximately 64 MiB of memory.

The object table is actually a hashtable keyed on the object name (the unique SHA-1). This storage configuration allows fast-import to reuse an existing or already written object and avoid writing duplicates to the output packfile. Duplicate blobs are surprisingly common in an import, typically due to branch merges in the source.

### per mark

Marks are stored in a sparse array, using 1 pointer (4 bytes or 8 bytes, depending on pointer size) per mark. Although the array is sparse, frontends are still strongly encouraged to use marks between 1 and n, where n is the total number of marks required for this import.

### per branch

Branches are classified as active and inactive. The memory usage of the two classes is significantly different.

Inactive branches are stored in a structure which uses 96 or 120 bytes (32 bit or 64 bit systems, respectively), plus the length of the branch name (typically under 200 bytes), per branch. fast-import will easily handle as many as 10,000 inactive branches in under 2 MiB of memory.
Active branches have the same overhead as inactive branches, but also contain copies of every tree that has been recently modified on that branch. If subtree `include` has not been modified since the branch became active, its contents will not be loaded into memory, but if subtree `src` has been modified by a commit since the branch became active, then its contents will be loaded in memory.

As active branches store metadata about the files contained on that branch, their in-memory storage size can grow to a considerable size (see below).

fast-import automatically moves active branches to inactive status based on a simple least-recently-used algorithm. The LRU chain is updated on each `commit` command. The maximum number of active branches can be increased or decreased on the command line with --active-branches=.

### per active tree

Trees (aka directories) use just 12 bytes of memory on top of the memory required for their entries (see “per active file” below). The cost of a tree is virtually 0, as its overhead amortizes out over the individual file entries.

### per active file entry

Files (and pointers to subtrees) within active trees require 52 or 64 bytes (32/64 bit platforms) per entry. To conserve space, file and tree names are pooled in a common string table, allowing the filename “Makefile” to use just 16 bytes (after including the string header overhead) no matter how many times it occurs within the project.

The active branch LRU, when coupled with the filename string pool and lazy loading of subtrees, allows fast-import to efficiently import projects with 2,000+ branches and 45,114+ files in a very limited memory footprint (less than 2.7 MiB per active branch).

Signals
-------

Sending **SIGUSR1** to the `git fast-import` process ends the current packfile early, simulating a `checkpoint` command.
The impatient operator can use this facility to peek at the objects and refs from an import in progress, at the cost of some added running time and worse compression.

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

fastimport.unpackLimit

If the number of objects imported by [git-fast-import[1]](git-fast-import) is below this limit, then the objects will be unpacked into loose object files. However if the number of imported objects equals or exceeds this limit then the pack will be stored as a pack. Storing the pack from a fast-import can make the import operation complete faster, especially on slow filesystems. If not set, the value of `transfer.unpackLimit` is used instead.

See also
--------

[git-fast-export[1]](git-fast-export)
git-web--browse
===============

Name
----

git-web--browse - Git helper script to launch a web browser

Synopsis
--------

```
git web--browse [<options>] (<URL>|<file>)…
```

Description
-----------

This script tries, as much as possible, to display the URLs and FILEs that are passed as arguments, as HTML pages in new tabs on an already opened web browser.

The following browsers (or commands) are currently supported:

* firefox (this is the default under X Window when not using KDE)
* iceweasel
* seamonkey
* iceape
* chromium (also supported as chromium-browser)
* google-chrome (also supported as chrome)
* konqueror (this is the default under KDE, see `Note about konqueror` below)
* opera
* w3m (this is the default outside graphical environments)
* elinks
* links
* lynx
* dillo
* open (this is the default under Mac OS X GUI)
* start (this is the default under MinGW)
* cygstart (this is the default under Cygwin)
* xdg-open

Custom commands may also be specified.

Options
-------

-b <browser>
--browser=<browser>

Use the specified browser. It must be in the list of supported browsers.

-t <browser>
--tool=<browser>

Same as above.

-c <conf.var>
--config=<conf.var>

CONF.VAR is looked up in the Git config files. If it’s set, then its value specifies the browser that should be used.

Configuration variables
-----------------------

### CONF.VAR (from -c option) and web.browser

The web browser can be specified using a configuration variable passed with the -c (or --config) command-line option, or the `web.browser` configuration variable if the former is not used.

### browser.<tool>.path

You can explicitly provide a full path to your preferred browser by setting the configuration variable `browser.<tool>.path`. For example, you can configure the absolute path to firefox by setting `browser.firefox.path`. Otherwise, `git web--browse` assumes the tool is available in PATH.
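For example, to point the `firefox` tool at a specific binary (the path shown is illustrative):

```
$ git config --global browser.firefox.path /usr/bin/firefox
```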
### browser.<tool>.cmd

When the browser, specified by options or configuration variables, is not among the supported ones, then the corresponding `browser.<tool>.cmd` configuration variable will be looked up. If this variable exists then `git web--browse` will treat the specified tool as a custom command and will use a shell eval to run the command with the URLs passed as arguments.

Note about konqueror
--------------------

When `konqueror` is specified by a command-line option or a configuration variable, we launch `kfmclient` to try to open the HTML man page on an already opened konqueror in a new tab if possible.

For consistency, we also try such a trick if `browser.konqueror.path` is set to something like `A_PATH_TO/konqueror`. That means we will try to launch `A_PATH_TO/kfmclient` instead.

If you really want to use `konqueror`, then you can use something like the following:

```
[web]
	browser = konq

[browser "konq"]
	cmd = A_PATH_TO/konqueror
```

### Note about git-config --global

Note that these configuration variables should probably be set using the `--global` flag, for example like this:

```
$ git config --global web.browser firefox
```

as they are probably more user specific than repository specific. See [git-config[1]](git-config) for more information about this.

git-fsck
========

Name
----

git-fsck - Verifies the connectivity and validity of the objects in the database

Synopsis
--------

```
git fsck [--tags] [--root] [--unreachable] [--cache] [--no-reflogs]
	 [--[no-]full] [--strict] [--verbose] [--lost-found]
	 [--[no-]dangling] [--[no-]progress] [--connectivity-only]
	 [--[no-]name-objects] [<object>…]
```

Description
-----------

Verifies the connectivity and validity of the objects in the database.

Options
-------

<object>

An object to treat as the head of an unreachability trace.
If no objects are given, `git fsck` defaults to using the index file, all SHA-1 references in the `refs` namespace, and all reflogs (unless --no-reflogs is given) as heads.

--unreachable

Print out objects that exist but that aren’t reachable from any of the reference nodes.

--[no-]dangling

Print objects that exist but that are never `directly` used (default). `--no-dangling` can be used to omit this information from the output.

--root

Report root nodes.

--tags

Report tags.

--cache

Consider any object recorded in the index also as a head node for an unreachability trace.

--no-reflogs

Do not consider commits that are referenced only by an entry in a reflog to be reachable. This option is meant only to search for commits that used to be in a ref, but now aren’t, but are still in that corresponding reflog.

--full

Check not just objects in GIT\_OBJECT\_DIRECTORY ($GIT\_DIR/objects), but also the ones found in alternate object pools listed in GIT\_ALTERNATE\_OBJECT\_DIRECTORIES or $GIT\_DIR/objects/info/alternates, and in packed Git archives found in $GIT\_DIR/objects/pack and corresponding pack subdirectories in alternate object pools. This is now the default; you can turn it off with --no-full.

--connectivity-only

Check only the connectivity of reachable objects, making sure that all objects referenced by a reachable tag, commit, or tree are present. This speeds up the operation by avoiding reading blobs entirely (though it does still check that referenced blobs exist). This will detect corruption in commits and trees, but not do any semantic checks (e.g., for format errors). Corruption in blob objects will not be detected at all.

Unreachable tags, commits, and trees will also be accessed to find the tips of dangling segments of history. Use `--no-dangling` if you don’t care about this output and want to speed it up further.

--strict

Enable more strict checking, namely to catch a file mode recorded with the g+w bit set, which was created by older versions of Git.
Existing repositories, including the Linux kernel, Git itself, and the sparse repository, have old objects that trigger this check, but it is recommended to check new projects with this flag. --verbose Be chatty. --lost-found Write dangling objects into .git/lost-found/commit/ or .git/lost-found/other/, depending on type. If the object is a blob, the contents are written into the file, rather than its object name. --name-objects When displaying names of reachable objects, in addition to the SHA-1 also display a name that describes **how** they are reachable, compatible with [git-rev-parse[1]](git-rev-parse), e.g. `HEAD@{1234567890}~25^2:src/`. --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --no-progress or --verbose is specified. --progress forces progress status even if the standard error stream is not directed to a terminal. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: fsck.<msg-id> During fsck git may find issues with legacy data which wouldn’t be generated by current versions of git, and which wouldn’t be sent over the wire if `transfer.fsckObjects` was set. This feature is intended to support working with legacy repositories containing such data. Setting `fsck.<msg-id>` will be picked up by [git-fsck[1]](git-fsck), but to accept pushes of such data set `receive.fsck.<msg-id>` instead, or to clone or fetch it set `fetch.fsck.<msg-id>`. The rest of the documentation discusses `fsck.*` for brevity, but the same applies for the corresponding `receive.fsck.*` and `fetch.fsck.*` variables. Unlike variables like `color.ui` and `core.editor` the `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` variables will not fall back on the `fsck.<msg-id>` configuration if they aren’t set.
To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values. When `fsck.<msg-id>` is set, errors can be switched to warnings and vice versa by configuring the `fsck.<msg-id>` setting where the `<msg-id>` is the fsck message ID and the value is one of `error`, `warn` or `ignore`. For convenience, fsck prefixes the error/warning with the message ID, e.g. "missingEmail: invalid author/committer line - missing email" means that setting `fsck.missingEmail = ignore` will hide that issue. In general, it is better to enumerate existing objects with problems with `fsck.skipList`, instead of listing the kind of breakages these problematic objects share to be ignored, as doing the latter will allow new instances of the same breakages to go unnoticed. Setting an unknown `fsck.<msg-id>` value will cause fsck to die, but doing the same for `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` will only cause git to warn. See the `Fsck Messages` section of [git-fsck[1]](git-fsck) for supported values of `<msg-id>`. fsck.skipList The path to a list of object names (i.e. one unabbreviated SHA-1 per line) that are known to be broken in a non-fatal way and should be ignored. On versions of Git 2.20 and later comments (`#`), empty lines, and any leading and trailing whitespace are ignored. Everything but a SHA-1 per line will error out on older versions. This feature is useful when an established project should be accepted despite early commits containing errors that can be safely ignored, such as invalid committer email addresses. Note: corrupt objects cannot be skipped with this setting. Like `fsck.<msg-id>` this variable has corresponding `receive.fsck.skipList` and `fetch.fsck.skipList` variants. Unlike variables like `color.ui` and `core.editor` the `receive.fsck.skipList` and `fetch.fsck.skipList` variables will not fall back on the `fsck.skipList` configuration if they aren’t set.
To uniformly configure the same fsck settings in different circumstances, all three of them must be set to the same values. Older versions of Git (before 2.20) documented that the object names list should be sorted. This was never a requirement; the object names could appear in any order, but when reading the list we tracked whether the list was sorted for the purposes of an internal binary search implementation, which could save itself some work with an already sorted list. Unless you had a humongous list there was no reason to go out of your way to pre-sort the list. After Git version 2.20 a hash implementation is used instead, so there’s now no reason to pre-sort the list. Discussion ---------- git-fsck tests SHA-1 and general object sanity, and it does full tracking of the resulting reachability and everything else. It prints out any corruption it finds (missing or bad objects), and if you use the `--unreachable` flag it will also print out objects that exist but that aren’t reachable from any of the specified head nodes (or the default set, as mentioned above). Any corrupt objects you will have to find in backups or other archives (i.e., you can just remove them and do an `rsync` with some other site in the hopes that somebody else has the object you have corrupted). If core.commitGraph is true, the commit-graph file will also be inspected using `git commit-graph verify`. See [git-commit-graph[1]](git-commit-graph). Extracted diagnostics --------------------- unreachable <type> <object> The <type> object <object> isn’t actually referred to directly or indirectly in any of the trees or commits seen. This can mean that there’s another root node that you’re not specifying or that the tree is corrupt. If you haven’t missed a root node then you might as well delete unreachable nodes since they can’t be used. missing <type> <object> The <type> object <object> is referred to but isn’t present in the database.
dangling <type> <object> The <type> object <object>, is present in the database but never `directly` used. A dangling commit could be a root node. hash mismatch <object> The database has an object whose hash doesn’t match the object database value. This indicates a serious data integrity problem. Fsck messages ------------- The following lists the types of errors `git fsck` detects and what each error means, with their default severity. The severity of the error, other than those that are marked as "(FATAL)", can be tweaked by setting the corresponding `fsck.<msg-id>` configuration variable. `badDate` (ERROR) Invalid date format in an author/committer line. `badDateOverflow` (ERROR) Invalid date value in an author/committer line. `badEmail` (ERROR) Invalid email format in an author/committer line. `badFilemode` (INFO) A tree contains a bad filemode entry. `badName` (ERROR) An author/committer name is empty. `badObjectSha1` (ERROR) An object has a bad sha1. `badParentSha1` (ERROR) A commit object has a bad parent sha1. `badTagName` (INFO) A tag has an invalid format. `badTimezone` (ERROR) Found an invalid time zone in an author/committer line. `badTree` (ERROR) A tree cannot be parsed. `badTreeSha1` (ERROR) A tree has an invalid format. `badType` (ERROR) Found an invalid object type. `duplicateEntries` (ERROR) A tree contains duplicate file entries. `emptyName` (WARN) A path contains an empty name. `extraHeaderEntry` (IGNORE) Extra headers found after `tagger`. `fullPathname` (WARN) A path contains the full path starting with "/". `gitattributesSymlink` (INFO) `.gitattributes` is a symlink. `gitignoreSymlink` (INFO) `.gitignore` is a symlink. `gitmodulesBlob` (ERROR) A non-blob found at `.gitmodules`. `gitmodulesLarge` (ERROR) The `.gitmodules` file is too large to parse. `gitmodulesMissing` (ERROR) Unable to read `.gitmodules` blob. `gitmodulesName` (ERROR) A submodule name is invalid. `gitmodulesParse` (INFO) Could not parse `.gitmodules` blob. 
`gitmodulesPath` (ERROR) `.gitmodules` path is invalid. `gitmodulesSymlink` (ERROR) `.gitmodules` is a symlink. `gitmodulesUpdate` (ERROR) Found an invalid submodule update setting. `gitmodulesUrl` (ERROR) Found an invalid submodule url. `hasDot` (WARN) A tree contains an entry named `.`. `hasDotdot` (WARN) A tree contains an entry named `..`. `hasDotgit` (WARN) A tree contains an entry named `.git`. `mailmapSymlink` (INFO) `.mailmap` is a symlink. `missingAuthor` (ERROR) Author is missing. `missingCommitter` (ERROR) Committer is missing. `missingEmail` (ERROR) Email is missing in an author/committer line. `missingNameBeforeEmail` (ERROR) Missing name before an email in an author/committer line. `missingObject` (ERROR) Missing `object` line in tag object. `missingSpaceBeforeDate` (ERROR) Missing space before date in an author/committer line. `missingSpaceBeforeEmail` (ERROR) Missing space before the email in author/committer line. `missingTag` (ERROR) Unexpected end after `type` line in a tag object. `missingTagEntry` (ERROR) Missing `tag` line in a tag object. `missingTaggerEntry` (INFO) Missing `tagger` line in a tag object. `missingTree` (ERROR) Missing `tree` line in a commit object. `missingType` (ERROR) Invalid type value on the `type` line in a tag object. `missingTypeEntry` (ERROR) Missing `type` line in a tag object. `multipleAuthors` (ERROR) Multiple author lines found in a commit. `nulInCommit` (WARN) Found a NUL byte in the commit object body. `nulInHeader` (FATAL) NUL byte exists in the object header. `nullSha1` (WARN) Tree contains entries pointing to a null sha1. `treeNotSorted` (ERROR) A tree is not properly sorted. `unknownType` (ERROR) Found an unknown object type. `unterminatedHeader` (FATAL) Missing end-of-line in the object header. `zeroPaddedDate` (ERROR) Found a zero padded date in an author/committer line. `zeroPaddedFilemode` (WARN) Found a zero padded filemode in a tree.
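As a sketch of how the severity overrides and `fsck.skipList` described above fit together, the following creates a scratch repository and configures fsck there. The message ID and skip-list path chosen here are purely illustrative, not recommendations:

```shell
# Illustrative only: throwaway repository; the message ID and the
# skip-list path are examples chosen for demonstration.
repo=$(mktemp -d)
git -C "$repo" init -q

# Demote one legacy, non-fatal issue to "ignore":
git -C "$repo" config fsck.zeroPaddedFilemode ignore

# Point fsck at a (here empty) list of known-broken object names:
: > "$repo/fsck-skip"
git -C "$repo" config fsck.skipList "$repo/fsck-skip"

# Run the check; --no-dangling suppresses dangling-object output:
git -C "$repo" fsck --no-dangling
```

Note that, as described above, `receive.fsck.<msg-id>` and `fetch.fsck.<msg-id>` do not fall back on `fsck.<msg-id>`, so pushes and fetches would have to be configured separately.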
Environment variables --------------------- GIT\_OBJECT\_DIRECTORY used to specify the object database root (usually $GIT\_DIR/objects) GIT\_INDEX\_FILE used to specify the index file of the index GIT\_ALTERNATE\_OBJECT\_DIRECTORIES used to specify additional object database roots (usually unset) git-whatchanged =============== Name ---- git-whatchanged - Show logs with the difference each commit introduces Synopsis -------- ``` git whatchanged <option>…​ ``` Description ----------- Shows commit logs and the diff output each commit introduces. New users are encouraged to use [git-log[1]](git-log) instead. The `whatchanged` command is essentially the same as [git-log[1]](git-log) but defaults to showing the raw format diff output and to skipping merges. The command is kept primarily for historical reasons; the fingers of many people who learned Git long before `git log` was invented by reading the Linux kernel mailing list are trained to type it. Examples -------- `git whatchanged -p v2.6.12.. include/scsi drivers/scsi` Show as patches the commits since version `v2.6.12` that changed any file in the include/scsi or drivers/scsi subdirectories. `git whatchanged --since="2 weeks ago" -- gitk` Show the changes during the last two weeks to the file `gitk`. The "--" is necessary to avoid confusion with the **branch** named `gitk`. git-tag ======= Name ---- git-tag - Create, list, delete or verify a tag object signed with GPG Synopsis -------- ``` git tag [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>] [-e] <tagname> [<commit> | <object>] git tag -d <tagname>…​ git tag [-n[<num>]] -l [--contains <commit>] [--no-contains <commit>] [--points-at <object>] [--column[=<options>] | --no-column] [--create-reflog] [--sort=<key>] [--format=<format>] [--merged <commit>] [--no-merged <commit>] [<pattern>…​] git tag -v [--format=<format>] <tagname>…​ ``` Description ----------- Add a tag reference in `refs/tags/`, unless `-d/-l/-v` is given to delete, list or verify tags.
Unless `-f` is given, the named tag must not yet exist. If one of `-a`, `-s`, or `-u <key-id>` is passed, the command creates a `tag` object, and requires a tag message. Unless `-m <msg>` or `-F <file>` is given, an editor is started for the user to type in the tag message. If `-m <msg>` or `-F <file>` is given and `-a`, `-s`, and `-u <key-id>` are absent, `-a` is implied. Otherwise, a tag reference that points directly at the given object (i.e., a lightweight tag) is created. A GnuPG signed tag object will be created when `-s` or `-u <key-id>` is used. When `-u <key-id>` is not used, the committer identity for the current user is used to find the GnuPG key for signing. The configuration variable `gpg.program` is used to specify a custom GnuPG binary. Tag objects (created with `-a`, `-s`, or `-u`) are called "annotated" tags; they contain a creation date, the tagger name and e-mail, a tagging message, and an optional GnuPG signature, whereas a "lightweight" tag is simply a name for an object (usually a commit object). Annotated tags are meant for releases, while lightweight tags are meant for private or temporary object labels. For this reason, some git commands for naming objects (like `git describe`) will ignore lightweight tags by default. Options ------- -a --annotate Make an unsigned, annotated tag object. -s --sign Make a GPG-signed tag, using the default e-mail address’s key. The default behavior of tag GPG-signing is controlled by the `tag.gpgSign` configuration variable if it exists, or disabled otherwise. See [git-config[1]](git-config). --no-sign Override the `tag.gpgSign` configuration variable that is set to force each and every tag to be signed. -u <key-id> --local-user=<key-id> Make a GPG-signed tag, using the given key. -f --force Replace an existing tag with the given name (instead of failing). -d --delete Delete existing tags with the given names. -v --verify Verify the GPG signature of the given tag names.
-n<num> <num> specifies how many lines from the annotation, if any, are printed when using -l. Implies `--list`. The default is not to print any annotation lines. If no number is given to `-n`, only the first line is printed. If the tag is not annotated, the commit message is displayed instead. -l --list List tags. With optional `<pattern>...`, e.g. `git tag --list 'v-*'`, list only the tags that match the pattern(s). Running "git tag" without arguments also lists all tags. The pattern is a shell wildcard (i.e., matched using fnmatch(3)). Multiple patterns may be given; if any of them matches, the tag is shown. This option is implicitly supplied if any other list-like option such as `--contains` is provided. See the documentation for each of those options for details. --sort=<key> Sort based on the key given. Prefix `-` to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. Also supports "version:refname" or "v:refname" (tag names are treated as versions). The "version:refname" sort order can also be affected by the "versionsort.suffix" configuration variable. The keys supported are the same as those in `git for-each-ref`. Sort order defaults to the value configured for the `tag.sort` variable if it exists, or lexicographic order otherwise. See [git-config[1]](git-config). --color[=<when>] Respect any colors specified in the `--format` option. The `<when>` field must be one of `always`, `never`, or `auto` (if `<when>` is absent, behave as if `always` was given). -i --ignore-case Sorting and filtering tags are case insensitive. --column[=<options>] --no-column Display tag listing in columns. See configuration variable `column.tag` for option syntax. `--column` and `--no-column` without options are equivalent to `always` and `never` respectively. This option is only applicable when listing tags without annotation lines. 
--contains [<commit>] Only list tags which contain the specified commit (HEAD if not specified). Implies `--list`. --no-contains [<commit>] Only list tags which don’t contain the specified commit (HEAD if not specified). Implies `--list`. --merged [<commit>] Only list tags whose commits are reachable from the specified commit (`HEAD` if not specified). --no-merged [<commit>] Only list tags whose commits are not reachable from the specified commit (`HEAD` if not specified). --points-at <object> Only list tags of the given object (HEAD if not specified). Implies `--list`. -m <msg> --message=<msg> Use the given tag message (instead of prompting). If multiple `-m` options are given, their values are concatenated as separate paragraphs. Implies `-a` if none of `-a`, `-s`, or `-u <key-id>` is given. -F <file> --file=<file> Take the tag message from the given file. Use `-` to read the message from the standard input. Implies `-a` if none of `-a`, `-s`, or `-u <key-id>` is given. -e --edit The message taken from file with `-F` and command line with `-m` are usually used as the tag message unmodified. This option lets you further edit the message taken from these sources. --cleanup=<mode> This option sets how the tag message is cleaned up. The `<mode>` can be one of `verbatim`, `whitespace` and `strip`. The `strip` mode is default. The `verbatim` mode does not change message at all, `whitespace` removes just leading/trailing whitespace lines and `strip` removes both whitespace and commentary. --create-reflog Create a reflog for the tag. To globally enable reflogs for tags, see `core.logAllRefUpdates` in [git-config[1]](git-config). The negated form `--no-create-reflog` only overrides an earlier `--create-reflog`, but currently does not negate the setting of `core.logAllRefUpdates`. --format=<format> A string that interpolates `%(fieldname)` from a tag ref being shown and the object it points at. The format is the same as that of [git-for-each-ref[1]](git-for-each-ref). 
When unspecified, defaults to `%(refname:strip=2)`. <tagname> The name of the tag to create, delete, or describe. The new tag name must pass all checks defined by [git-check-ref-format[1]](git-check-ref-format). Some of these checks may restrict the characters allowed in a tag name. <commit> <object> The object that the new tag will refer to, usually a commit. Defaults to HEAD. Configuration ------------- By default, `git tag` in sign-with-default mode (-s) will use your committer identity (of the form `Your Name <[email protected]>`) to find a key. If you want to use a different default key, you can specify it in the repository configuration as follows: ``` [user] signingKey = <gpg-key_id> ``` `pager.tag` is only respected when listing tags, i.e., when `-l` is used or implied. The default is to use a pager. See [git-config[1]](git-config). Discussion ---------- ### On Re-tagging What should you do when you tag the wrong commit and want to re-tag? If you never pushed anything out, just re-tag it. Use "-f" to replace the old one. And you’re done. But if you have pushed things out (or others could just read your repository directly), then others will have already seen the old tag. In that case you can do one of two things: 1. The sane thing. Just admit you screwed up, and use a different name. Others have already seen one tag-name, and if you keep the same name, you may be in the situation that two people both have "version X", but they actually have `different` "X"'s. So just call it "X.1" and be done with it. 2. The insane thing. You really want to call the new version "X" too, `even though` others have already seen the old one. So just use `git tag -f` again, as if you hadn’t already published the old one. However, Git does **not** (and it should not) change tags behind users' backs. So if somebody already got the old tag, doing a `git pull` on your tree shouldn’t just make them overwrite the old one.
If somebody got a release tag from you, you cannot just change the tag for them by updating your own one. This is a big security issue, in that people MUST be able to trust their tag-names. If you really want to do the insane thing, you need to just fess up to it, and tell people that you messed up. You can do that by making a very public announcement saying: ``` Ok, I messed up, and I pushed out an earlier version tagged as X. I then fixed something, and retagged the *fixed* tree as X again. If you got the wrong tag, and want the new one, please delete the old one and fetch the new one by doing: git tag -d X git fetch origin tag X to get my updated tag. You can test which tag you have by doing git rev-parse X which should return 0123456789abcdef.. if you have the new version. Sorry for the inconvenience. ``` Does this seem a bit complicated? It **should** be. There is no way that it would be correct to just "fix" it automatically. People need to know that their tags might have been changed. ### On Automatic following If you are following somebody else’s tree, you are most likely using remote-tracking branches (eg. `refs/remotes/origin/master`). You usually want the tags from the other end. On the other hand, if you are fetching because you would want a one-shot merge from somebody else, you typically do not want to get tags from there. This happens more often for people near the toplevel but not limited to them. Mere mortals when pulling from each other do not necessarily want to automatically get private anchor point tags from the other person. Often, "please pull" messages on the mailing list just provide two pieces of information: a repo URL and a branch name; this is designed to be easily cut&pasted at the end of a `git fetch` command line: ``` Linus, please pull from git://git..../proj.git master to get the following updates... 
``` becomes: ``` $ git pull git://git..../proj.git master ``` In such a case, you do not want to automatically follow the other person’s tags. One important aspect of Git is its distributed nature, which largely means there is no inherent "upstream" or "downstream" in the system. On the face of it, the above example might seem to indicate that the tag namespace is owned by the upper echelon of people and that tags only flow downwards, but that is not the case. It only shows that the usage pattern determines who are interested in whose tags. A one-shot pull is a sign that a commit history is now crossing the boundary between one circle of people (e.g. "people who are primarily interested in the networking part of the kernel") who may have their own set of tags (e.g. "this is the third release candidate from the networking group to be proposed for general consumption with 2.6.21 release") to another circle of people (e.g. "people who integrate various subsystem improvements"). The latter are usually not interested in the detailed tags used internally in the former group (that is what "internal" means). That is why it is desirable not to follow tags automatically in this case. It may well be that among networking people, they may want to exchange the tags internal to their group, but in that workflow they are most likely tracking each other’s progress by having remote-tracking branches. Again, the heuristic to automatically follow such tags is a good thing. ### On Backdating Tags If you have imported some changes from another VCS and would like to add tags for major releases of your work, it is useful to be able to specify the date to embed inside of the tag object; such data in the tag object affects, for example, the ordering of tags in the gitweb interface. To set the date used in future tag objects, set the environment variable GIT\_COMMITTER\_DATE (see the later discussion of possible values; the most common form is "YYYY-MM-DD HH:MM"). 
For example: ``` $ GIT_COMMITTER_DATE="2006-10-02 10:31" git tag -s v1.0.1 ``` Date formats ------------ The `GIT_AUTHOR_DATE` and `GIT_COMMITTER_DATE` environment variables support the following date formats: Git internal format It is `<unix-timestamp> <time-zone-offset>`, where `<unix-timestamp>` is the number of seconds since the UNIX epoch. `<time-zone-offset>` is a positive or negative offset from UTC. For example CET (which is 1 hour ahead of UTC) is `+0100`. RFC 2822 The standard email format as described by RFC 2822, for example `Thu, 07 Apr 2005 22:13:13 +0200`. ISO 8601 Time and date specified by the ISO 8601 standard, for example `2005-04-07T22:13:13`. The parser accepts a space instead of the `T` character as well. Fractional parts of a second will be ignored, for example `2005-04-07T22:13:13.019` will be treated as `2005-04-07T22:13:13`. | | | | --- | --- | | Note | In addition, the date part is accepted in the following formats: `YYYY.MM.DD`, `MM/DD/YYYY` and `DD.MM.YYYY`. | Notes ----- When combining multiple `--contains` and `--no-contains` filters, only references that contain at least one of the `--contains` commits and contain none of the `--no-contains` commits are shown. When combining multiple `--merged` and `--no-merged` filters, only references that are reachable from at least one of the `--merged` commits and from none of the `--no-merged` commits are shown. See also -------- [git-check-ref-format[1]](git-check-ref-format). [git-config[1]](git-config).
git-diff-tree ============= Name ---- git-diff-tree - Compares the content and mode of blobs found via two tree objects Synopsis -------- ``` git diff-tree [--stdin] [-m] [-s] [-v] [--no-commit-id] [--pretty] [-t] [-r] [-c | --cc] [--combined-all-paths] [--root] [--merge-base] [<common-diff-options>] <tree-ish> [<tree-ish>] [<path>…​] ``` Description ----------- Compares the content and mode of the blobs found via two tree objects. If there is only one <tree-ish> given, the commit is compared with its parents (see --stdin below). Note that `git diff-tree` can use the tree encapsulated in a commit object. Options ------- -p -u --patch Generate patch (see section on generating patches). -s --no-patch Suppress diff output. Useful for commands like `git show` that show the patch by default, or to cancel the effect of `--patch`. -U<n> --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies `--patch`. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char> --output-indicator-old=<char> --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and ' ' respectively. --raw Generate the diff in raw format. This is the default. --patch-with-raw Synonym for `-p --raw`. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once.
If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. `patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use `--diff-algorithm=default` option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. 
The information is put between the filename part and the graph part. Implies `--stat`. --numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. 
Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>…​] Synonym for --dirstat=files,param1,param2…​ --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for `-p --stat`. -z When `--raw`, `--numstat`, `--name-only` or `--name-status` has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page. --name-status Show only names and status of changed files. See the description of the `--diff-filter` option on what the status letters mean. Just like `--name-only` the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying `--submodule=short` the `short` format is used. This format just shows the names of the commits at the beginning and end of the range. When `--submodule` or `--submodule=log` is specified, the `log` format is used. This format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. 
When `--submodule=diff` is specified, the `diff` format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to `diff.submodule` or the `short` format if the config option is unset. --color[=<when>] Show colored diff. `--color` (i.e. without `=<when>`) is the same as `--color=always`. `<when>` can be one of `always`, `never`, or `auto`. --no-color Turn off colored diff. It is the same as `--color=never`. --color-moved[=<mode>] Moved lines of code are colored differently. The <mode> defaults to `no` if the option is not given and to `zebra` if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for `zebra`. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with `color.diff.newMoved`. Similarly `color.diff.oldMoved` will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the `color.diff.{old,new}Moved` color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in `blocks` mode. The blocks are painted using either the `color.diff.{old,new}Moved` color or `color.diff.{old,new}MovedAlternative`. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to `zebra`, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. `dimmed_zebra` is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. 
It is the same as `--color-moved=no`. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for `--color-moved`. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as `--color-moved-ws=no`. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see `--word-diff-regex` below. The <mode> defaults to `plain`, and must be one of: color Highlight changed words using only colors. Implies `--color`. plain Show words as `[-removed-]` and `{+added+}`. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a `+`/`-`/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde `~` on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. 
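As a quick illustration of the word-diff modes above, assuming a repository with at least two commits (the file contents are hypothetical):

```shell
# Word-level diff: removed words appear as [-word-], added ones as {+word+}.
git diff --word-diff=plain HEAD~1 HEAD

# Character-level diff: with the regex `.` every character is a "word".
git diff --word-diff=plain --word-diff-regex=. HEAD~1 HEAD
```

In `color` mode the same changes are shown purely with colors, which is often easier to scan in a terminal than the explicit delimiters.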
--word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies `--word-diff` unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append `|[^[:space:]]` to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, `--word-diff-regex=.` will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see [gitattributes[5]](gitattributes) or [git-config[1]](git-config). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to `--word-diff=color` plus (if a regex was specified) `--word-diff-regex=<regex>`. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` reset the list to `new` and `all` is a shorthand for `old,new,context`. 
When this option is not given, and the configuration variable `diff.wsErrorHighlight` is not set, only whitespace errors in `new` lines are highlighted. The whitespace errors are colored with `color.diff.whitespace`. --full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. Implies `--patch`. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long that uniquely refers to the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. A non-default number of digits can be specified with `--abbrev=<n>`. -B[<n>][/<m>] --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file appears not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%).
`-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size is eligible for being picked up as a possible source of a rename to another file. -M[<n>] --find-renames[=<n>] Detect renames. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%. -C[<n>] --find-copies[=<n>] Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`. --find-copies-harder For performance reasons, by default, the `-C` option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of a copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect. -D --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair.
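A minimal sketch of rename detection in practice (the file names are made up):

```shell
# Rename a tracked file and commit the result.
git mv old.txt new.txt
git commit -m "rename old.txt to new.txt"

# With -M, the delete/add pair is reported as a rename; R100 means the
# similarity index is 100% (content unchanged).
git diff --name-status -M HEAD~1 HEAD
```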
-l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)…​[\*]] Select only files that are Added (`A`), Copied (`C`), Deleted (`D`), Modified (`M`), Renamed (`R`), have their type (i.e. regular file, symlink, submodule, …​) changed (`T`), are Unmerged (`U`), are Unknown (`X`), or have had their pairing Broken (`B`). Any combination of the filter characters (including none) can be used. When `*` (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. `--diff-filter=ad` excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into `-S`, and keep going until you get the very first version of the block. Binary files are searched as well. 
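For instance, to trace when occurrences of a symbol appeared or disappeared (the symbol name `frotz_init` is made up):

```shell
# Commits in which the number of occurrences of the string changed.
git log --oneline -S'frotz_init'

# The same idea, but treating the argument as an extended POSIX regex.
git log --oneline -S'frotz_[a-z]+' --pickaxe-regex
```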
-G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between `-S<regex> --pickaxe-regex` and `-G<regex>`, consider a commit with the following diff in the same file: ``` + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); ``` While `git log -G"frotz\(nitfol"` will show this commit, `git log -S"frotz\(nitfol" --pickaxe-regex` will not (because the number of occurrences of that string did not change). Unless `--text` is supplied patches of binary files without a textconv filter will be ignored. See the `pickaxe` entry in [gitdiffcore[7]](gitdiffcore) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to `-S`, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the `-t` option in `git-log` to also find trees. --pickaxe-all When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to `-S` as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. 
If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: * Blank lines are ignored, so they can be used as separators for readability. * Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash. * Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM\_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`". --skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b --ignore-space-change Ignore changes in amount of whitespace. 
This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex> --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to `diff.interHunkContext` or 0 if the config option is unset. -W --function-context Show whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies `--exit-code`. --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends. --no-ext-diff Disallow external diff drivers. --textconv --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. 
<when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also [gitdiffcore[7]](gitdiffcore). <tree-ish> The id of a tree object. <path>…​ If provided, the results are limited to a subset of files matching one of the provided pathspecs. -r recurse into sub-trees -t show tree entry itself as well as subtrees. Implies -r. --root When `--root` is specified the initial commit will be shown as a big creation event. This is equivalent to a diff against the NULL tree. --merge-base Instead of comparing the <tree-ish>s directly, use the merge base between the two <tree-ish>s as the "before" side. 
There must be two <tree-ish>s given and they must both be commits. --stdin When `--stdin` is specified, the command does not take <tree-ish> arguments from the command line. Instead, it reads lines containing either two <tree>, one <commit>, or a list of <commit> from its standard input. (Use a single space as separator.) When two trees are given, it compares the first tree with the second. When a single commit is given, it compares the commit with its parents. The remaining commits, when given, are used as if they are parents of the first commit. When comparing two trees, the ID of both trees (separated by a space and terminated by a newline) is printed before the difference. When comparing commits, the ID of the first (or only) commit, followed by a newline, is printed. The following flags further affect the behavior when comparing commits (but not trees). -m By default, `git diff-tree --stdin` does not show differences for merge commits. With this flag, it shows differences to that commit from all of its parents. See also `-c`. -s By default, `git diff-tree --stdin` shows differences, either in machine-readable form (without `-p`) or in patch form (with `-p`). This output can be suppressed. It is only useful with `-v` flag. -v This flag causes `git diff-tree --stdin` to also show the commit message before the differences. --pretty[=<format>] --format=<format> Pretty-print the contents of the commit logs in a given format, where `<format>` can be one of `oneline`, `short`, `medium`, `full`, `fuller`, `reference`, `email`, `raw`, `format:<string>` and `tformat:<string>`. When `<format>` is none of the above, and has `%placeholder` in it, it acts as if `--pretty=tformat:<format>` were given. See the "PRETTY FORMATS" section for some additional details for each format. When `=<format>` part is omitted, it defaults to `medium`. Note: you can specify the default pretty format in the repository configuration (see [git-config[1]](git-config)). 
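Putting `--stdin` together with a pretty format, a machine-friendly history walk might be sketched like this (assuming the current branch has a few commits):

```shell
# For each commit read from stdin, print "<hash> <subject>" followed by
# the paths it touched, with no patch text.
git rev-list HEAD | git diff-tree --stdin --pretty=oneline --name-only -r
```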
--abbrev-commit Instead of showing the full 40-byte hexadecimal commit object name, show a prefix that names the object uniquely. "--abbrev=<n>" (which also modifies diff output, if it is displayed) option can be used to specify the minimum length of the prefix. This should make "--pretty=oneline" a whole lot more readable for people using 80-column terminals. --no-abbrev-commit Show the full 40-byte hexadecimal commit object name. This negates `--abbrev-commit`, either explicit or implied by other options such as "--oneline". It also overrides the `log.abbrevCommit` variable. --oneline This is a shorthand for "--pretty=oneline --abbrev-commit" used together. --encoding=<encoding> Commit objects record the character encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user. For non plumbing commands this defaults to UTF-8. Note that if an object claims to be encoded in `X` and we are outputting in `X`, we will output the object verbatim; this means that invalid sequences in the original commit may be copied to the output. Likewise, if iconv(3) fails to convert the commit, we will quietly output the original object verbatim. --expand-tabs=<n> --expand-tabs --no-expand-tabs Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is multiple of `<n>`) in the log message before showing it in the output. `--expand-tabs` is a short-hand for `--expand-tabs=8`, and `--no-expand-tabs` is a short-hand for `--expand-tabs=0`, which disables tab expansion. By default, tabs are expanded in pretty formats that indent the log message by 4 spaces (i.e. `medium`, which is the default, `full`, and `fuller`). --notes[=<ref>] Show the notes (see [git-notes[1]](git-notes)) that annotate the commit, when showing the commit log message. 
This is the default for `git log`, `git show` and `git whatchanged` commands when there is no `--pretty`, `--format`, or `--oneline` option given on the command line. By default, the notes shown are from the notes refs listed in the `core.notesRef` and `notes.displayRef` variables (or corresponding environment overrides). See [git-config[1]](git-config) for more details. With an optional `<ref>` argument, use the ref to find the notes to display. The ref can specify the full refname when it begins with `refs/notes/`; when it begins with `notes/`, `refs/` and otherwise `refs/notes/` is prefixed to form a full name of the ref. Multiple --notes options can be combined to control which notes are being displayed. Examples: "--notes=foo" will show only notes from "refs/notes/foo"; "--notes=foo --notes" will show both notes from "refs/notes/foo" and from the default notes ref(s). --no-notes Do not show notes. This negates the above `--notes` option, by resetting the list of notes refs from which notes are shown. Options are parsed in the order given on the command line, so e.g. "--notes --notes=foo --no-notes --notes=bar" will only show notes from "refs/notes/bar". --show-notes[=<ref>] --[no-]standard-notes These options are deprecated. Use the above --notes/--no-notes options instead. --show-signature Check the validity of a signed commit object by passing the signature to `gpg --verify` and show the output. --no-commit-id `git diff-tree` outputs a line with the commit ID when applicable. This flag suppresses the commit ID output. -c This flag changes the way a merge commit is displayed (which means it is useful only when the command is given one <tree-ish>, or `--stdin`). It shows the differences from each of the parents to the merge result simultaneously instead of showing pairwise diff between a parent and the result one at a time (which is what the `-m` option does). Furthermore, it lists only files which were modified from all parents.
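A combined diff can be sketched like this, assuming `HEAD` is a merge commit:

```shell
# List only the paths whose merge result differs from *all* parents of
# the merge commit at HEAD.
git diff-tree -c --name-only HEAD
```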
--cc This flag changes the way a merge commit patch is displayed, in a similar way to the `-c` option. It implies the `-c` and `-p` options and further compresses the patch output by omitting uninteresting hunks whose contents in the parents have only two variants and the merge result picks one of them without modification. When all hunks are uninteresting, the commit itself and the commit log message are not shown, just like in any other "empty diff" case. --combined-all-paths This flag causes combined diffs (used for merge commits) to list the name of the file from all parents. It thus only has effect when -c or --cc are specified, and is likely only useful if filename changes are detected (i.e. when either rename or copy detection has been requested). --always Show the commit itself and the commit log message even if the diff itself is empty.

Pretty formats
--------------

If the commit is a merge, and if the pretty-format is not `oneline`, `email` or `raw`, an additional line is inserted before the `Author:` line. This line begins with "Merge: " and the hashes of ancestral commits are printed, separated by spaces. Note that the listed commits may not necessarily be the list of the **direct** parent commits if you have limited your view of history: for example, if you are only interested in changes related to a certain directory or file. There are several built-in formats, and you can define additional formats by setting a pretty.<name> config option to either another format name, or a `format:` string, as described below (see [git-config[1]](git-config)). Here are the details of the built-in formats:

* `oneline`

```
<hash> <title-line>
```

This is designed to be as compact as possible.
* `short` ``` commit <hash> Author: <author> ``` ``` <title-line> ``` * `medium` ``` commit <hash> Author: <author> Date: <author-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `full` ``` commit <hash> Author: <author> Commit: <committer> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `fuller` ``` commit <hash> Author: <author> AuthorDate: <author-date> Commit: <committer> CommitDate: <committer-date> ``` ``` <title-line> ``` ``` <full-commit-message> ``` * `reference` ``` <abbrev-hash> (<title-line>, <short-author-date>) ``` This format is used to refer to another commit in a commit message and is the same as `--pretty='format:%C(auto)%h (%s, %ad)'`. By default, the date is formatted with `--date=short` unless another `--date` option is explicitly specified. As with any `format:` with format placeholders, its output is not affected by other options like `--decorate` and `--walk-reflogs`. * `email` ``` From <hash> <date> From: <author> Date: <author-date> Subject: [PATCH] <title-line> ``` ``` <full-commit-message> ``` * `mboxrd` Like `email`, but lines in the commit message starting with "From " (preceded by zero or more ">") are quoted with ">" so they aren’t confused as starting a new commit. * `raw` The `raw` format shows the entire commit exactly as stored in the commit object. Notably, the hashes are displayed in full, regardless of whether --abbrev or --no-abbrev are used, and `parents` information show the true parent commits, without taking grafts or history simplification into account. Note that this format affects the way commits are displayed, but not the way the diff is shown e.g. with `git log --raw`. To get full object names in a raw diff format, use `--no-abbrev`. * `format:<format-string>` The `format:<format-string>` format allows you to specify which information you want to show. It works a little bit like printf format, with the notable exception that you get a newline with `%n` instead of `\n`. 
E.g., `format:"The author of %h was %an, %ar%nThe title was >>%s<<%n"` would show something like this:

```
The author of fe6e0ee was Junio C Hamano, 23 hours ago
The title was >>t4119: test autocomputing -p<n> for traditional diff input.<<
```

The placeholders are: + Placeholders that expand to a single literal character: *%n* newline *%%* a raw `%` *%x00* print a byte from a hex code + Placeholders that affect formatting of later placeholders: *%Cred* switch color to red *%Cgreen* switch color to green *%Cblue* switch color to blue *%Creset* reset color *%C(…​)* color specification, as described under Values in the "CONFIGURATION FILE" section of [git-config[1]](git-config). By default, colors are shown only when enabled for log output (by `color.diff`, `color.ui`, or `--color`, and respecting the `auto` settings of the former if we are going to a terminal). `%C(auto,...)` is accepted as a historical synonym for the default (e.g., `%C(auto,red)`). Specifying `%C(always,...)` will show the colors even when color is not otherwise enabled (though consider just using `--color=always` to enable color for the whole output, including this format and anything else git might color). `auto` alone (i.e. `%C(auto)`) will turn on auto coloring on the next placeholders until the color is switched again. *%m* left (`<`), right (`>`) or boundary (`-`) mark *%w([<w>[,<i1>[,<i2>]]])* switch line wrapping, like the -w option of [git-shortlog[1]](git-shortlog). *%<(<N>[,trunc|ltrunc|mtrunc])* make the next placeholder take at least N columns, padding spaces on the right if necessary. Optionally truncate at the beginning (ltrunc), the middle (mtrunc) or the end (trunc) if the output is longer than N columns. Note that truncating only works correctly with N >= 2.
*%<|(<N>)* make the next placeholder take at least until Nth columns, padding spaces on the right if necessary *%>(<N>)*, *%>|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding spaces on the left *%>>(<N>)*, *%>>|(<N>)* similar to `%>(<N>)`, `%>|(<N>)` respectively, except that if the next placeholder takes more spaces than given and there are spaces on its left, use those spaces *%><(<N>)*, *%><|(<N>)* similar to `%<(<N>)`, `%<|(<N>)` respectively, but padding both sides (i.e. the text is centered) + Placeholders that expand to information extracted from the commit: *%H* commit hash *%h* abbreviated commit hash *%T* tree hash *%t* abbreviated tree hash *%P* parent hashes *%p* abbreviated parent hashes *%an* author name *%aN* author name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ae* author email *%aE* author email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%al* author email local-part (the part before the `@` sign) *%aL* author local-part (see `%al`) respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame) *%ad* author date (format respects --date= option) *%aD* author date, RFC2822 style *%ar* author date, relative *%at* author date, UNIX timestamp *%ai* author date, ISO 8601-like format *%aI* author date, strict ISO 8601 format *%as* author date, short format (`YYYY-MM-DD`) *%ah* author date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list)) *%cn* committer name *%cN* committer name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ce* committer email *%cE* committer email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%cl* committer email local-part (the part before the `@` sign) *%cL* committer local-part (see `%cl`) respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame) *%cd*
committer date (format respects --date= option) *%cD* committer date, RFC2822 style *%cr* committer date, relative *%ct* committer date, UNIX timestamp *%ci* committer date, ISO 8601-like format *%cI* committer date, strict ISO 8601 format *%cs* committer date, short format (`YYYY-MM-DD`) *%ch* committer date, human style (like the `--date=human` option of [git-rev-list[1]](git-rev-list)) *%d* ref names, like the --decorate option of [git-log[1]](git-log) *%D* ref names without the " (", ")" wrapping. *%(describe[:options])* human-readable name, like [git-describe[1]](git-describe); empty string for undescribable commits. The `describe` string may be followed by a colon and zero or more comma-separated options. Descriptions can be inconsistent when tags are added or removed at the same time. - `tags[=<bool-value>]`: Instead of only considering annotated tags, consider lightweight tags as well. - `abbrev=<number>`: Instead of using the default number of hexadecimal digits (which will vary according to the number of objects in the repository with a default of 7) of the abbreviated object name, use <number> digits, or as many digits as needed to form a unique object name. - `match=<pattern>`: Only consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix. - `exclude=<pattern>`: Do not consider tags matching the given `glob(7)` pattern, excluding the "refs/tags/" prefix. 
*%S* ref name given on the command line by which the commit was reached (like `git log --source`), only works with `git log` *%e* encoding *%s* subject *%f* sanitized subject line, suitable for a filename *%b* body *%B* raw body (unwrapped subject and body) *%N* commit notes *%GG* raw verification message from GPG for a signed commit *%G?* show "G" for a good (valid) signature, "B" for a bad signature, "U" for a good signature with unknown validity, "X" for a good signature that has expired, "Y" for a good signature made by an expired key, "R" for a good signature made by a revoked key, "E" if the signature cannot be checked (e.g. missing key) and "N" for no signature *%GS* show the name of the signer for a signed commit *%GK* show the key used to sign a signed commit *%GF* show the fingerprint of the key used to sign a signed commit *%GP* show the fingerprint of the primary key whose subkey was used to sign a signed commit *%GT* show the trust level for the key used to sign a signed commit *%gD* reflog selector, e.g., `refs/stash@{1}` or `refs/stash@{2 minutes ago}`; the format follows the rules described for the `-g` option. The portion before the `@` is the refname as given on the command line (so `git log -g refs/heads/master` would yield `refs/heads/master@{0}`). *%gd* shortened reflog selector; same as `%gD`, but the refname portion is shortened for human readability (so `refs/heads/master` becomes just `master`). *%gn* reflog identity name *%gN* reflog identity name (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%ge* reflog identity email *%gE* reflog identity email (respecting .mailmap, see [git-shortlog[1]](git-shortlog) or [git-blame[1]](git-blame)) *%gs* reflog subject *%(trailers[:options])* display the trailers of the body as interpreted by [git-interpret-trailers[1]](git-interpret-trailers). The `trailers` string may be followed by a colon and zero or more comma-separated options. 
If any option is provided multiple times the last occurrence wins. - `key=<key>`: only show trailers with specified <key>. Matching is done case-insensitively and trailing colon is optional. If option is given multiple times trailer lines matching any of the keys are shown. This option automatically enables the `only` option so that non-trailer lines in the trailer block are hidden. If that is not desired it can be disabled with `only=false`. E.g., `%(trailers:key=Reviewed-by)` shows trailer lines with key `Reviewed-by`. - `only[=<bool>]`: select whether non-trailer lines from the trailer block should be included. - `separator=<sep>`: specify a separator inserted between trailer lines. When this option is not given each trailer line is terminated with a line feed character. The string <sep> may contain the literal formatting codes described above. To use comma as separator one must use `%x2C` as it would otherwise be parsed as next option. E.g., `%(trailers:key=Ticket,separator=%x2C )` shows all trailer lines whose key is "Ticket" separated by a comma and a space. - `unfold[=<bool>]`: make it behave as if interpret-trailer’s `--unfold` option was given. E.g., `%(trailers:only,unfold=true)` unfolds and shows all trailer lines. - `keyonly[=<bool>]`: only show the key part of the trailer. - `valueonly[=<bool>]`: only show the value part of the trailer. - `key_value_separator=<sep>`: specify a separator inserted between trailer lines. When this option is not given each trailer key-value pair is separated by ": ". Otherwise it shares the same semantics as `separator=<sep>` above. | | | | --- | --- | | Note | Some placeholders may depend on other options given to the revision traversal engine. For example, the `%g*` reflog options will insert an empty string unless we are traversing reflog entries (e.g., by `git log -g`). The `%d` and `%D` placeholders will use the "short" decoration format if `--decorate` was not already provided on the command line. 
| The boolean options accept an optional value `[=<bool-value>]`. The values `true`, `false`, `on`, `off` etc. are all accepted. See the "boolean" sub-section in "EXAMPLES" in [git-config[1]](git-config). If a boolean option is given with no value, it’s enabled. If you add a `+` (plus sign) after `%` of a placeholder, a line-feed is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string. If you add a `-` (minus sign) after `%` of a placeholder, all consecutive line-feeds immediately preceding the expansion are deleted if and only if the placeholder expands to an empty string. If you add a ` ` (space) after `%` of a placeholder, a space is inserted immediately before the expansion if and only if the placeholder expands to a non-empty string. * `tformat:` The `tformat:` format works exactly like `format:`, except that it provides "terminator" semantics instead of "separator" semantics. In other words, each commit has the message terminator character (usually a newline) appended, rather than a separator placed between entries. This means that the final entry of a single-line format will be properly terminated with a new line, just as the "oneline" format does. For example: ``` $ git log -2 --pretty=format:%h 4da45bef \ | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/' 4da45be 7134973 -- NO NEWLINE $ git log -2 --pretty=tformat:%h 4da45bef \ | perl -pe '$_ .= " -- NO NEWLINE\n" unless /\n/' 4da45be 7134973 ``` In addition, any unrecognized string that has a `%` in it is interpreted as if it has `tformat:` in front of it. For example, these two are equivalent: ``` $ git log -2 --pretty=tformat:%h 4da45bef $ git log -2 --pretty=%h 4da45bef ``` Raw output format ----------------- The raw output formats from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" are very similar.
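For concreteness, here is one way to produce such a raw record in a scratch repository. Everything below (the temporary repository, the file name `file0`, and the `Demo` identity) is invented for the example:

```shell
# Sketch: generate a raw-format record with `git diff --raw`.
# All names here are placeholders made up for this demo.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
printf 'a\n' > file0
git add file0
git -c user.name=Demo -c user.email=demo@example.com commit -q -m 'add file0'
printf 'b\n' > file0              # modify the work tree copy only
raw=$(git diff --raw)             # something like: :100644 100644 <hash> 0000000 M  file0
echo "$raw"
```

The all-zero "dst" hash appears because the work tree file has not yet been hashed into the index, matching the "out of sync with the index" rule described later in this section.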
These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>…​] compares the trees named by the two arguments. git-diff-files [<pattern>…​] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file. An output line is formatted this way: ``` in-place edit :100644 100644 bcd1234 0123456 M file0 copy-edit :100644 100644 abcd123 1234567 C68 file1 file2 rename-edit :100644 100644 abcd123 1234567 R86 file1 file3 create :000000 100644 0000000 1234567 A file4 delete :100644 000000 1234567 0000000 D file5 unmerged :000000 000000 0000000 0000000 U file6 ``` That is, from the left to the right: 1. a colon. 2. mode for "src"; 000000 if creation or unmerged. 3. a space. 4. mode for "dst"; 000000 if deletion or unmerged. 5. a space. 6. sha1 for "src"; 0{40} if creation or unmerged. 7. a space. 8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index". 9. a space. 10. status, followed by optional "score" number. 11. a tab or a NUL when `-z` option is used. 12. path for "src" 13. a tab or a NUL when `-z` option is used; only exists for C or R. 14. path for "dst"; only exists for C or R. 15. an LF or a NUL when `-z` option is used, to terminate the record. 
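Because the fields are separated by single spaces (with a tab before the path), a record can be split mechanically. A minimal shell sketch using the `file0` sample line from the output above (in real `-z`-less output a tab precedes the path; `read` splits on spaces and tabs alike):

```shell
# Sketch: split one raw-format record into the fields enumerated above.
# The sample line is copied from the example output; a real one would come
# from a command such as `git diff-tree -r <commit>`.
record=':100644 100644 bcd1234 0123456 M file0'
# Drop the leading colon, then let `read` split the remaining fields.
read -r src_mode dst_mode src_sha dst_sha status path <<EOF
${record#:}
EOF
echo "$status $path"
```

Note that for `C` and `R` records the status field carries a trailing score (e.g. `C68`) and a second path follows, so a real parser would handle those cases separately.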
Possible status letters are: * A: addition of a file * C: copy of a file into a new one * D: deletion of a file * M: modification of the contents or mode of a file * R: renaming of a file * T: change in the type of the file (regular file, symbolic link or submodule) * U: file is unmerged (you must complete the merge before it can be committed) * X: "unknown" change type (most probably a bug, please report it) Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites. The sha1 for "dst" is shown as all 0’s if a file on the filesystem is out of sync with the index. Example: ``` :100644 100644 5be4a4a 0000000 M file.c ``` Without the `-z` option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). Using `-z` the filename is output verbatim and the line is terminated by a NUL byte. Diff format for merges ---------------------- "git-diff-tree", "git-diff-files" and "git-diff --raw" can take `-c` or `--cc` option to generate diff output also for merge commits. The output differs from the format described above in the following way: 1. there is a colon for each parent 2. there are more "src" modes and "src" sha1 3. status is concatenated status characters for each parent 4. no optional "score" number 5. tab-separated pathname(s) of the file For `-c` and `--cc`, only the destination or final path is shown even if the file was renamed on any side of history. With `--combined-all-paths`, the name of the path in each parent is shown followed by the name of the path in the merge commit. 
Examples for `-c` and `--cc` without `--combined-all-paths`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR phooey.c ``` Examples when `--combined-all-paths` added to either `-c` or `--cc`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c desc.c desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM foo.sh bar.sh bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR fooey.c fuey.c phooey.c ``` Note that `combined diff` lists only files which were modified from all parents. Generating patch text with -p ----------------------------- Running [git-diff[1]](git-diff), [git-log[1]](git-log), [git-show[1]](git-show), [git-diff-index[1]](git-diff-index), [git-diff-tree[1]](git-diff-tree), or [git-diff-files[1]](git-diff-files) with the `-p` option produces patch text. You can customize the creation of patch text via the `GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables (see [git[1]](git)), and the `diff` attribute (see [gitattributes[5]](gitattributes)). What the -p option produces is slightly different from the traditional diff format: 1. It is preceded with a "git diff" header that looks like this: ``` diff --git a/file1 b/file2 ``` The `a/` and `b/` filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, `/dev/null` is `not` used in place of the `a/` or `b/` filenames. When rename/copy is involved, `file1` and `file2` show the name of the source file of the rename/copy and the name of the file that rename/copy produces, respectively. 2. 
It is followed by one or more extended header lines: ``` old mode <mode> new mode <mode> deleted file mode <mode> new file mode <mode> copy from <path> copy to <path> rename from <path> rename to <path> similarity index <number> dissimilarity index <number> index <hash>..<hash> <mode> ``` File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the `a/` and `b/` prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode. 3. Pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). 4. All the `file1` files in the output refer to files before the commit, and all the `file2` files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b: ``` diff --git a/a b/b rename from a rename to b diff --git a/b b/a rename from b rename to a ``` 5. Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in [gitattributes[5]](gitattributes) for details of how to tailor this to specific languages. Combined diff format -------------------- Any diff-generating command can take the `-c` or `--cc` option to produce a `combined diff` when showing a merge. This is the default format when showing merges with [git-diff[1]](git-diff) or [git-show[1]](git-show).
Note also that you can give suitable `--diff-merges` option to any of these commands to force generation of diffs in specific format. A "combined diff" format looks like this: ``` diff --combined describe.c index fabadb8,cc95eb0..4866510 --- a/describe.c +++ b/describe.c @@@ -98,20 -98,12 +98,20 @@@ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1; } - static void describe(char *arg) -static void describe(struct commit *cmit, int last_one) ++static void describe(char *arg, int last_one) { + unsigned char sha1[20]; + struct commit *cmit; struct commit_list *list; static int initialized = 0; struct commit_name *n; + if (get_sha1(arg, sha1) < 0) + usage(describe_usage); + cmit = lookup_commit_reference(sha1); + if (!cmit) + usage(describe_usage); + if (!initialized) { initialized = 1; for_each_ref(get_name); ``` 1. It is preceded with a "git diff" header, that looks like this (when the `-c` option is used): ``` diff --combined file ``` or like this (when the `--cc` option is used): ``` diff --cc file ``` 2. It is followed by one or more extended header lines (this example shows a merge with two parents): ``` index <hash>,<hash>..<hash> mode <mode>,<mode>..<mode> new file mode <mode> deleted file mode <mode>,<mode> ``` The `mode <mode>,<mode>..<mode>` line appears only if at least one of the <mode> is different from the rest. Extended headers with information about detected contents movement (renames and copying detection) are designed to work with diff of two <tree-ish> and are not used by combined diff format. 3. It is followed by two-line from-file/to-file header ``` --- a/file +++ b/file ``` Similar to two-line header for traditional `unified` diff format, `/dev/null` is used to signal created or deleted files. 
However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit ``` --- a/file --- a/file --- a/file +++ b/file ``` This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents. 4. Chunk header format is modified to prevent people from accidentally feeding it to `patch -p1`. Combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended `index` header: ``` @@@ <from-file-range> <from-file-range> <to-file-range> @@@ ``` There are (number of parents + 1) `@` characters in the chunk header for combined diff format. Unlike the traditional `unified` diff format, which shows two files A and B with a single column that has `-` (minus — appears in A but removed in B), `+` (plus — missing in A but added to B), or `" "` (space — unchanged) prefix, this format compares two or more files file1, file2,… with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X’s line is different from it. A `-` character in the column N means that the line appears in fileN but it does not appear in the result. A `+` character in the column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent). In the above example output, the function signature was changed from both files (hence two `-` removals from both file1 and file2, plus `++` to mean one line that was added does not appear in either file1 or file2). Also eight other lines are the same from file1 but do not appear in file2 (hence prefixed with `+`). When shown by `git diff-tree -c`, it compares the parents of a merge commit with the merge result (i.e.
file1..fileN are the parents). When shown by `git diff-files -c`, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version"). Other diff formats ------------------ The `--summary` option describes newly added, deleted, renamed and copied files. The `--stat` option adds diffstat(1) graph to the output. These options can be combined with other options, such as `-p`, and are meant for human consumption. When showing a change that involves a rename or a copy, `--stat` output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves `arch/i386/Makefile` to `arch/x86/Makefile` while modifying 4 lines will be shown like this: ``` arch/{i386 => x86}/Makefile | 4 +-- ``` The `--numstat` option gives the diffstat(1) information but is designed for easier machine consumption. An entry in `--numstat` output looks like this: ``` 1 2 README 3 1 arch/{i386 => x86}/Makefile ``` That is, from left to right: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. pathname (possibly with rename/copy information); 6. a newline. When `-z` output option is in effect, the output is formatted this way: ``` 1 2 README NUL 3 1 NUL arch/i386/Makefile NUL arch/x86/Makefile NUL ``` That is: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. a NUL (only exists if renamed/copied); 6. pathname in preimage; 7. a NUL (only exists if renamed/copied); 8. pathname in postimage (only exists if renamed/copied); 9. a NUL. The extra `NUL` before the preimage path in renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to `NUL` would yield the pathname, but if that is `NUL`, the record will show two paths.
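As a quick sanity check of the `--numstat` layout, one can generate a record in a throwaway repository. The repository path, file name, and identity below are invented for the example:

```shell
# Sketch: produce a --numstat record in a scratch repository.
# "Demo", demo@example.com and README are placeholders for this demo.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
printf 'one\ntwo\n' > README
git add README
git -c user.name=Demo -c user.email=demo@example.com commit -q -m 'add README'
printf 'one\nTWO\nthree\n' > README   # change one line, add one line
numstat=$(git diff --numstat)         # added<TAB>deleted<TAB>pathname
echo "$numstat"
```

Here one line was rewritten (one deletion plus one addition) and one line was appended, so the record reads `2`, a tab, `1`, a tab, `README`.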
git git-quiltimport git-quiltimport =============== Name ---- git-quiltimport - Applies a quilt patchset onto the current branch Synopsis -------- ``` git quiltimport [--dry-run | -n] [--author <author>] [--patches <dir>] [--series <file>] [--keep-non-patch] ``` Description ----------- Applies a quilt patchset onto the current Git branch, preserving the patch boundaries, patch order, and patch descriptions present in the quilt patchset. For each patch the code attempts to extract the author from the patch description. If that fails it falls back to the author specified with --author. If the --author flag was not given the patch description is displayed and the user is asked to interactively enter the author of the patch. If a subject is not found in the patch description the patch name is preserved as the one-line subject in the Git description. Options ------- -n --dry-run Walk through the patches in the series and warn if we cannot find all of the necessary information to commit a patch. At the time of this writing only missing author information is warned about. --author Author Name <Author Email> The author name and email address to use when no author information can be found in the patch description. --patches <dir> The directory to find the quilt patches. The default for the patch directory is patches or the value of the `$QUILT_PATCHES` environment variable. --series <file> The quilt series file. The default for the series file is <patches>/series or the value of the `$QUILT_SERIES` environment variable. --keep-non-patch Pass `-b` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)).
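A minimal end-to-end sketch of the workflow described above. The repository, patch name (`goodbye.patch`), file name, and author are all invented for this example:

```shell
# Sketch: build a one-patch quilt series and import it with git quiltimport.
# All names below are placeholders made up for the demo.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.name Demo
git config user.email demo@example.com
printf 'hello\n' > greeting.txt
git add greeting.txt
git commit -q -m 'initial'

mkdir patches
cat > patches/goodbye.patch <<'EOF'
From: Alice Author <alice@example.com>
Subject: greeting: say goodbye

--- a/greeting.txt
+++ b/greeting.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
echo goodbye.patch > patches/series

# Import the series; the author is taken from the From: line of the patch.
git quiltimport --patches patches --series patches/series
```

After the import, the patch has been applied and committed on top of the current branch, preserving its description.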
git git-update-server-info git-update-server-info ====================== Name ---- git-update-server-info - Update auxiliary info file to help dumb servers Synopsis -------- ``` git update-server-info [-f | --force] ``` Description ----------- A dumb server that does not do on-the-fly pack generations must have some auxiliary information files in $GIT\_DIR/info and $GIT\_OBJECT\_DIRECTORY/info directories to help clients discover what references and packs the server has. This command generates such auxiliary files. Options ------- -f --force update the info files from scratch. Output ------ Currently the command updates the following files. Please see [gitrepository-layout[5]](gitrepository-layout) for description of what they are for: * objects/info/packs * info/refs git git-filter-branch git-filter-branch ================= Name ---- git-filter-branch - Rewrite branches Synopsis -------- ``` git filter-branch [--setup <command>] [--subdirectory-filter <directory>] [--env-filter <command>] [--tree-filter <command>] [--index-filter <command>] [--parent-filter <command>] [--msg-filter <command>] [--commit-filter <command>] [--tag-name-filter <command>] [--prune-empty] [--original <namespace>] [-d <directory>] [-f | --force] [--state-branch <branch>] [--] [<rev-list options>…​] ``` Warning ------- `git filter-branch` has a plethora of pitfalls that can produce non-obvious manglings of the intended history rewrite (and can leave you with little time to investigate such problems since it has such abysmal performance). These safety and performance issues cannot be backward compatibly fixed and as such, its use is not recommended. Please use an alternative history filtering tool such as [git filter-repo](https://github.com/newren/git-filter-repo/). 
If you still need to use `git filter-branch`, please carefully read [SAFETY](#SAFETY) (and [PERFORMANCE](#PERFORMANCE)) to learn about the land mines of filter-branch, and then vigilantly avoid as many of the hazards listed there as reasonably possible. Description ----------- Lets you rewrite Git revision history by rewriting the branches mentioned in the <rev-list options>, applying custom filters on each revision. Those filters can modify each tree (e.g. removing a file or running a perl rewrite on all files) or information about each commit. Otherwise, all information (including original commit times or merge information) will be preserved. The command will only rewrite the `positive` refs mentioned in the command line (e.g. if you pass `a..b`, only `b` will be rewritten). If you specify no filters, the commits will be recommitted without any changes, which would normally have no effect. Nevertheless, this may be useful in the future for compensating for some Git bugs or such, therefore such a usage is permitted. **NOTE**: This command honors `.git/info/grafts` file and refs in the `refs/replace/` namespace. If you have any grafts or replacement refs defined, running this command will make them permanent. **WARNING**! The rewritten history will have different object names for all the objects and will not converge with the original branch. You will not be able to easily push and distribute the rewritten branch on top of the original branch. Please do not use this command if you do not know the full implications, and avoid using it anyway, if a simple single commit would suffice to fix your problem. (See the "RECOVERING FROM UPSTREAM REBASE" section in [git-rebase[1]](git-rebase) for further information about rewriting published history.) Always verify that the rewritten version is correct: The original refs, if different from the rewritten ones, will be stored in the namespace `refs/original/`. 
Note that since this operation is very I/O expensive, it might be a good idea to redirect the temporary directory off-disk with the `-d` option, e.g. on tmpfs. Reportedly the speedup is very noticeable. ### Filters The filters are applied in the order as listed below. The <command> argument is always evaluated in the shell context using the `eval` command (with the notable exception of the commit filter, for technical reasons). Prior to that, the `$GIT_COMMIT` environment variable will be set to contain the id of the commit being rewritten. Also, GIT\_AUTHOR\_NAME, GIT\_AUTHOR\_EMAIL, GIT\_AUTHOR\_DATE, GIT\_COMMITTER\_NAME, GIT\_COMMITTER\_EMAIL, and GIT\_COMMITTER\_DATE are taken from the current commit and exported to the environment, in order to affect the author and committer identities of the replacement commit created by [git-commit-tree[1]](git-commit-tree) after the filters have run. If any evaluation of <command> returns a non-zero exit status, the whole operation will be aborted. A `map` function is available that takes an "original sha1 id" argument and outputs a "rewritten sha1 id" if the commit has been already rewritten, and "original sha1 id" otherwise; the `map` function can return several ids on separate lines if your commit filter emitted multiple commits. Options ------- --setup <command> This is not a real filter executed for each commit but a one time setup just before the loop. Therefore no commit-specific variables are defined yet. Functions or variables defined here can be used or modified in the following filter steps except the commit filter, for technical reasons. --subdirectory-filter <directory> Only look at the history which touches the given subdirectory. The result will contain that directory (and only that) as its project root. Implies [Remap to ancestor](#Remap_to_ancestor). --env-filter <command> This filter may be used if you only need to modify the environment in which the commit will be performed. 
Specifically, you might want to rewrite the author/committer name/email/time environment variables (see [git-commit-tree[1]](git-commit-tree) for details). --tree-filter <command> This is the filter for rewriting the tree and its contents. The argument is evaluated in shell with the working directory set to the root of the checked out tree. The new tree is then used as-is (new files are auto-added, disappeared files are auto-removed - neither .gitignore files nor any other ignore rules **HAVE ANY EFFECT**!). --index-filter <command> This is the filter for rewriting the index. It is similar to the tree filter but does not check out the tree, which makes it much faster. Frequently used with `git rm --cached --ignore-unmatch ...`, see EXAMPLES below. For hairy cases, see [git-update-index[1]](git-update-index). --parent-filter <command> This is the filter for rewriting the commit’s parent list. It will receive the parent string on stdin and shall output the new parent string on stdout. The parent string is in the format described in [git-commit-tree[1]](git-commit-tree): empty for the initial commit, "-p parent" for a normal commit and "-p parent1 -p parent2 -p parent3 …​" for a merge commit. --msg-filter <command> This is the filter for rewriting the commit messages. The argument is evaluated in the shell with the original commit message on standard input; its standard output is used as the new commit message. --commit-filter <command> This is the filter for performing the commit. If this filter is specified, it will be called instead of the `git commit-tree` command, with arguments of the form "<TREE\_ID> [(-p <PARENT\_COMMIT\_ID>)…​]" and the log message on stdin. The commit id is expected on stdout. As a special extension, the commit filter may emit multiple commit ids; in that case, the rewritten children of the original commit will have all of them as parents. You can use the `map` convenience function in this filter, and other convenience functions, too. 
For example, calling `skip_commit "$@"` will leave out the current commit (but not its changes! If you want that, use `git rebase` instead). You can also use the `git_commit_non_empty_tree "$@"` instead of `git commit-tree "$@"` if you don’t wish to keep commits with a single parent and that makes no change to the tree. --tag-name-filter <command> This is the filter for rewriting tag names. When passed, it will be called for every tag ref that points to a rewritten object (or to a tag object which points to a rewritten object). The original tag name is passed via standard input, and the new tag name is expected on standard output. The original tags are not deleted, but can be overwritten; use "--tag-name-filter cat" to simply update the tags. In this case, be very careful and make sure you have the old tags backed up in case the conversion has run afoul. Nearly proper rewriting of tag objects is supported. If the tag has a message attached, a new tag object will be created with the same message, author, and timestamp. If the tag has a signature attached, the signature will be stripped. It is by definition impossible to preserve signatures. The reason this is "nearly" proper, is because ideally if the tag did not change (points to the same object, has the same name, etc.) it should retain any signature. That is not the case, signatures will always be removed, buyer beware. There is also no support for changing the author or timestamp (or the tag message for that matter). Tags which point to other tags will be rewritten to point to the underlying commit. --prune-empty Some filters will generate empty commits that leave the tree untouched. This option instructs git-filter-branch to remove such commits if they have exactly one or zero non-pruned parents; merge commits will therefore remain intact. 
This option cannot be used together with `--commit-filter`, though the same effect can be achieved by using the provided `git_commit_non_empty_tree` function in a commit filter. --original <namespace> Use this option to set the namespace where the original commits will be stored. The default value is `refs/original`. -d <directory> Use this option to set the path to the temporary directory used for rewriting. When applying a tree filter, the command needs to temporarily check out the tree to some directory, which may consume considerable space in case of large projects. By default it does this in the `.git-rewrite/` directory but you can override that choice by this parameter. -f --force `git filter-branch` refuses to start with an existing temporary directory or when there are already refs starting with `refs/original/`, unless forced. --state-branch <branch> This option will cause the mapping from old to new objects to be loaded from named branch upon startup and saved as a new commit to that branch upon exit, enabling incremental rewriting of large trees. If `<branch>` does not exist it will be created. <rev-list options>… Arguments for `git rev-list`. All positive refs included by these options are rewritten. You may also specify options such as `--all`, but you must use `--` to separate them from the `git filter-branch` options. Implies [Remap to ancestor](#Remap_to_ancestor). ### Remap to ancestor By using [git-rev-list[1]](git-rev-list) arguments, e.g., path limiters, you can limit the set of revisions which get rewritten. However, positive refs on the command line are distinguished: we don’t let them be excluded by such limiters. For this purpose, they are instead rewritten to point at the nearest ancestor that was not excluded. Exit status ----------- On success, the exit status is `0`. If the filter can’t find any commits to rewrite, the exit status is `2`. On any other error, the exit status may be any other non-zero value.
Examples -------- Suppose you want to remove a file (containing confidential information or copyright violation) from all commits: ``` git filter-branch --tree-filter 'rm filename' HEAD ``` However, if the file is absent from the tree of some commit, a simple `rm filename` will fail for that tree and commit. Thus you may instead want to use `rm -f filename` as the script. Using `--index-filter` with `git rm` yields a significantly faster version. Like with using `rm filename`, `git rm --cached filename` will fail if the file is absent from the tree of a commit. If you want to "completely forget" a file, it does not matter when it entered history, so we also add `--ignore-unmatch`: ``` git filter-branch --index-filter 'git rm --cached --ignore-unmatch filename' HEAD ``` Now, you will get the rewritten history saved in HEAD. To rewrite the repository to look as if `foodir/` had been its project root, and discard all other history: ``` git filter-branch --subdirectory-filter foodir -- --all ``` Thus you can, e.g., turn a library subdirectory into a repository of its own. Note the `--` that separates `filter-branch` options from revision options, and the `--all` to rewrite all branches and tags. To set a commit (which typically is at the tip of another history) to be the parent of the current initial commit, in order to paste the other history behind the current history: ``` git filter-branch --parent-filter 'sed "s/^\$/-p <graft-id>/"' HEAD ``` (if the parent string is empty - which happens when we are dealing with the initial commit - add graftcommit as a parent). Note that this assumes history with a single root (that is, no merge without common ancestors happened). 
If this is not the case, use: ``` git filter-branch --parent-filter \ 'test $GIT_COMMIT = <commit-id> && echo "-p <graft-id>" || cat' HEAD ``` or even simpler: ``` git replace --graft $commit-id $graft-id git filter-branch $graft-id..HEAD ``` To remove commits authored by "Darl McBribe" from the history: ``` git filter-branch --commit-filter ' if [ "$GIT_AUTHOR_NAME" = "Darl McBribe" ]; then skip_commit "$@"; else git commit-tree "$@"; fi' HEAD ``` The function `skip_commit` is defined as follows: ``` skip_commit() { shift; while [ -n "$1" ]; do shift; map "$1"; shift; done; } ``` The shift magic first throws away the tree id and then the -p parameters. Note that this handles merges properly! In case Darl committed a merge between P1 and P2, it will be propagated properly and all children of the merge will become merge commits with P1,P2 as their parents instead of the merge commit. **NOTE** the changes introduced by the commits, and which are not reverted by subsequent commits, will still be in the rewritten branch. If you want to throw out changes together with the commits, you should use the interactive mode of `git rebase`. You can rewrite the commit log messages using `--msg-filter`. For example, `git-svn-id` strings in a repository created by `git svn` can be removed this way: ``` git filter-branch --msg-filter ' sed -e "/^git-svn-id:/d" ' ``` If you need to add `Acked-by` lines to, say, the last 10 commits (none of which is a merge), use this command: ``` git filter-branch --msg-filter ' cat && echo "Acked-by: Bugs Bunny <bunny@bugzilla.org>" ' HEAD~10..HEAD ``` The `--env-filter` option can be used to modify committer and/or author identity.
For example, if you found out that your commits have the wrong identity due to a misconfigured user.email, you can make a correction, before publishing the project, like this: ``` git filter-branch --env-filter ' if test "$GIT_AUTHOR_EMAIL" = "root@localhost" then GIT_AUTHOR_EMAIL=john@example.com fi if test "$GIT_COMMITTER_EMAIL" = "root@localhost" then GIT_COMMITTER_EMAIL=john@example.com fi ' -- --all ``` To restrict rewriting to only part of the history, specify a revision range in addition to the new branch name. The new branch name will point to the top-most revision that a `git rev-list` of this range will print. Consider this history: ``` D--E--F--G--H / / A--B-----C ``` To rewrite only commits D,E,F,G,H, but leave A, B and C alone, use: ``` git filter-branch ... C..H ``` To rewrite commits E,F,G,H, use one of these: ``` git filter-branch ... C..H --not D git filter-branch ... D..H --not C ``` To move the whole tree into a subdirectory, or remove it from there: ``` git filter-branch --index-filter \ 'git ls-files -s | sed "s-\t\"*-&newsubdir/-" | GIT_INDEX_FILE=$GIT_INDEX_FILE.new \ git update-index --index-info && mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' HEAD ``` Checklist for shrinking a repository ------------------------------------ git-filter-branch can be used to get rid of a subset of files, usually with some combination of `--index-filter` and `--subdirectory-filter`. People expect the resulting repository to be smaller than the original, but you need a few more steps to actually make it smaller, because Git tries hard not to lose your objects until you tell it to. First make sure that: * You really removed all variants of a filename, if a blob was moved over its lifetime. `git log --name-only --follow --all -- filename` can help you find renames. * You really filtered all refs: use `--tag-name-filter cat -- --all` when calling git-filter-branch. Then there are two ways to get a smaller repository. A safer way is to clone, which keeps your original intact.
* Clone it with `git clone file:///path/to/repo`. The clone will not have the removed objects. See [git-clone[1]](git-clone). (Note that cloning with a plain path just hardlinks everything!) If you really don’t want to clone it, for whatever reasons, check the following points instead (in this order). This is a very destructive approach, so **make a backup** or go back to cloning it. You have been warned. * Remove the original refs backed up by git-filter-branch: say `git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d`. * Expire all reflogs with `git reflog expire --expire=now --all`. * Garbage collect all unreferenced objects with `git gc --prune=now` (or if your git-gc is not new enough to support arguments to `--prune`, use `git repack -ad; git prune` instead). Performance ----------- The performance of git-filter-branch is glacially slow; its design makes it impossible for a backward-compatible implementation to ever be fast: * In editing files, git-filter-branch by design checks out each and every commit as it existed in the original repo. If your repo has `10^5` files and `10^5` commits, but each commit only modifies five files, then git-filter-branch will make you do `10^10` modifications, despite only having (at most) `5*10^5` unique blobs. 
* If you try and cheat and try to make git-filter-branch only work on files modified in a commit, then two things happen + you run into problems with deletions whenever the user is simply trying to rename files (because attempting to delete files that don’t exist looks like a no-op; it takes some chicanery to remap deletes across file renames when the renames happen via arbitrary user-provided shell) + even if you succeed at the map-deletes-for-renames chicanery, you still technically violate backward compatibility because users are allowed to filter files in ways that depend upon topology of commits instead of filtering solely based on file contents or names (though this has not been observed in the wild). * Even if you don’t need to edit files but only want to e.g. rename or remove some and thus can avoid checking out each file (i.e. you can use --index-filter), you still are passing shell snippets for your filters. This means that for every commit, you have to have a prepared git repo where those filters can be run. That’s a significant setup. * Further, several additional files are created or updated per commit by git-filter-branch. Some of these are for supporting the convenience functions provided by git-filter-branch (such as map()), while others are for keeping track of internal state (but could have also been accessed by user filters; one of git-filter-branch’s regression tests does so). This essentially amounts to using the filesystem as an IPC mechanism between git-filter-branch and the user-provided filters. Disks tend to be a slow IPC mechanism, and writing these files also effectively represents a forced synchronization point between separate processes that we hit with every commit. * The user-provided shell commands will likely involve a pipeline of commands, resulting in the creation of many processes per commit. 
Creating and running another process takes a widely varying amount of time between operating systems, but on any platform it is very slow relative to invoking a function. * git-filter-branch itself is written in shell, which is kind of slow. This is the one performance issue that could be backward-compatibly fixed, but compared to the above problems that are intrinsic to the design of git-filter-branch, the language of the tool itself is a relatively minor issue. + Side note: Unfortunately, people tend to fixate on the written-in-shell aspect and periodically ask if git-filter-branch could be rewritten in another language to fix the performance issues. Not only does that ignore the bigger intrinsic problems with the design, it’d help less than you’d expect: if git-filter-branch itself were not shell, then the convenience functions (map(), skip\_commit(), etc) and the `--setup` argument could no longer be executed once at the beginning of the program but would instead need to be prepended to every user filter (and thus re-executed with every commit). The [git filter-repo](https://github.com/newren/git-filter-repo/) tool is an alternative to git-filter-branch which does not suffer from these performance problems or the safety problems (mentioned below). For those with existing tooling which relies upon git-filter-branch, `git filter-repo` also provides [filter-lamely](https://github.com/newren/git-filter-repo/blob/master/contrib/filter-repo-demos/filter-lamely), a drop-in git-filter-branch replacement (with a few caveats). While filter-lamely suffers from all the same safety issues as git-filter-branch, it at least ameliorates the performance issues a little. 
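As a concrete point of comparison, the two most common filter-branch tasks from this page translate to single git filter-repo invocations. A sketch only, assuming git-filter-repo is installed separately; `filename` and `foodir` are the same placeholders used in the examples above:

```shell
# Remove one path from all of history; --invert-paths makes the
# --path selection name what to drop rather than what to keep.
git filter-repo --invert-paths --path filename

# Rewrite the repository as if foodir/ had been its project root
# (the analog of --subdirectory-filter).
git filter-repo --subdirectory-filter foodir
```

Unlike filter-branch, filter-repo also deletes the old refs and repacks for you, so the "Checklist for shrinking a repository" steps above are not needed afterwards.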
Safety ------ git-filter-branch is riddled with gotchas resulting in various ways to easily corrupt repos or end up with a mess worse than what you started with: * Someone can have a set of "working and tested filters" which they document or provide to a coworker, who then runs them on a different OS where the same commands are not working/tested (some examples in the git-filter-branch manpage are also affected by this). BSD vs. GNU userland differences can really bite. If lucky, error messages are spewed. But just as likely, the commands either don’t do the filtering requested, or silently corrupt by making some unwanted change. The unwanted change may only affect a few commits, so it’s not necessarily obvious either. (The fact that problems won’t necessarily be obvious means they are likely to go unnoticed until the rewritten history is in use for quite a while, at which point it’s really hard to justify another flag-day for another rewrite.) * Filenames with spaces are often mishandled by shell snippets since they cause problems for shell pipelines. Not everyone is familiar with find -print0, xargs -0, git-ls-files -z, etc. Even people who are familiar with these may assume such flags are not relevant because someone else renamed any such files in their repo back before the person doing the filtering joined the project. And often, even those familiar with handling arguments with spaces may not do so just because they aren’t in the mindset of thinking about everything that could possibly go wrong. * Non-ascii filenames can be silently removed despite being in a desired directory. Keeping only wanted paths is often done using pipelines like `git ls-files | grep -v ^WANTED_DIR/ | xargs git rm`. ls-files will only quote filenames if needed, so folks may not notice that one of the files didn’t match the regex (at least not until it’s much too late). 
Yes, someone who knows about core.quotePath can avoid this (unless they have other special characters like \t, \n, or "), and people who use ls-files -z with something other than grep can avoid this, but that doesn’t mean they will. * Similarly, when moving files around, one can find that filenames with non-ascii or special characters end up in a different directory, one that includes a double quote character. (This is technically the same issue as above with quoting, but perhaps an interesting different way that it can and has manifested as a problem.) * It’s far too easy to accidentally mix up old and new history. It’s still possible with any tool, but git-filter-branch almost invites it. If lucky, the only downside is users getting frustrated that they don’t know how to shrink their repo and remove the old stuff. If unlucky, they merge old and new history and end up with multiple "copies" of each commit, some of which have unwanted or sensitive files and others which don’t. This comes about in multiple different ways: + the default to only doing a partial history rewrite (`--all` is not the default and few examples show it) + the fact that there’s no automatic post-run cleanup + the fact that --tag-name-filter (when used to rename tags) doesn’t remove the old tags but just adds new ones with the new name + the fact that little educational information is provided to inform users of the ramifications of a rewrite and how to avoid mixing old and new history. For example, this man page discusses how users need to understand that they need to rebase their changes for all their branches on top of new history (or delete and reclone), but that’s only one of multiple concerns to consider. See the "DISCUSSION" section of the git filter-repo manual page for more details. 
* Annotated tags can be accidentally converted to lightweight tags, due to either of two issues: + Someone can do a history rewrite, realize they messed up, restore from the backups in refs/original/, and then redo their git-filter-branch command. (The backup in refs/original/ is not a real backup; it dereferences tags first.) + Running git-filter-branch with either --tags or --all in your <rev-list options>. In order to retain annotated tags as annotated, you must use --tag-name-filter (and must not have restored from refs/original/ in a previously botched rewrite). * Any commit messages that specify an encoding will become corrupted by the rewrite; git-filter-branch ignores the encoding, takes the original bytes, and feeds it to commit-tree without telling it the proper encoding. (This happens whether or not --msg-filter is used.) * Commit messages (even if they are all UTF-8) by default become corrupted due to not being updated — any references to other commit hashes in commit messages will now refer to no-longer-extant commits. * There are no facilities for helping users find what unwanted crud they should delete, which means they are much more likely to have incomplete or partial cleanups that sometimes result in confusion and people wasting time trying to understand. (For example, folks tend to just look for big files to delete instead of big directories or extensions, and once they do so, then sometime later folks using the new repository who are going through history will notice a build artifact directory that has some files but not others, or a cache of dependencies (node\_modules or similar) which couldn’t have ever been functional since it’s missing some files.) 
* If --prune-empty isn’t specified, then the filtering process can create hoards of confusing empty commits * If --prune-empty is specified, then intentionally placed empty commits from before the filtering operation are also pruned instead of just pruning commits that became empty due to filtering rules. * If --prune-empty is specified, sometimes empty commits are missed and left around anyway (a somewhat rare bug, but it happens…​) * A minor issue, but users who have a goal to update all names and emails in a repository may be led to --env-filter which will only update authors and committers, missing taggers. * If the user provides a --tag-name-filter that maps multiple tags to the same name, no warning or error is provided; git-filter-branch simply overwrites each tag in some undocumented pre-defined order resulting in only one tag at the end. (A git-filter-branch regression test requires this surprising behavior.) Also, the poor performance of git-filter-branch often leads to safety issues: * Coming up with the correct shell snippet to do the filtering you want is sometimes difficult unless you’re just doing a trivial modification such as deleting a couple files. Unfortunately, people often learn if the snippet is right or wrong by trying it out, but the rightness or wrongness can vary depending on special circumstances (spaces in filenames, non-ascii filenames, funny author names or emails, invalid timezones, presence of grafts or replace objects, etc.), meaning they may have to wait a long time, hit an error, then restart. The performance of git-filter-branch is so bad that this cycle is painful, reducing the time available to carefully re-check (to say nothing about what it does to the patience of the person doing the rewrite even if they do technically have more time available). This problem is extra compounded because errors from broken filters may not be shown for a long time and/or get lost in a sea of output. 
Even worse, broken filters often just result in silent incorrect rewrites. * To top it all off, even when users finally find working commands, they naturally want to share them. But they may be unaware that their repo didn’t have some special cases that someone else’s does. So, when someone else with a different repository runs the same commands, they get hit by the problems above. Or, the user just runs commands that really were vetted for special cases, but they run it on a different OS where it doesn’t work, as noted above.
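Several of the filename pitfalls above come from newline-delimited pipelines. A NUL-delimited variant of the "keep only one directory" pipeline is immune to spaces and to the quoting that core.quotePath applies to non-ASCII names. A sketch, assuming GNU grep and xargs; `keep_only` and `WANTED_DIR` are illustrative names:

```shell
#!/bin/sh
# NUL-delimited version of:
#   git ls-files | grep -v ^WANTED_DIR/ | xargs git rm
# -z/-0 keep filenames with spaces, quotes, or non-ASCII bytes intact;
# -r skips git rm entirely when every file is already under the kept dir.
keep_only() {
    git ls-files -z |
        grep -zv "^$1/" |
        xargs -0 -r git rm --cached -q --
}

# Example use (e.g. inside an --index-filter): keep_only WANTED_DIR
```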
git git-bundle git-bundle ========== Name ---- git-bundle - Move objects and refs by archive Synopsis -------- ``` git bundle create [-q | --quiet | --progress | --all-progress] [--all-progress-implied] [--version=<version>] <file> <git-rev-list-args> git bundle verify [-q | --quiet] <file> git bundle list-heads <file> [<refname>…​] git bundle unbundle [--progress] <file> [<refname>…​] ``` Description ----------- Create, unpack, and manipulate "bundle" files. Bundles are used for the "offline" transfer of Git objects without an active "server" sitting on the other side of the network connection. They can be used to create both incremental and full backups of a repository, and to relay the state of the references in one repository to another. Git commands that fetch or otherwise "read" via protocols such as `ssh://` and `https://` can also operate on bundle files. It is possible to [git-clone[1]](git-clone) a new repository from a bundle, to use [git-fetch[1]](git-fetch) to fetch from one, and to list the references contained within it with [git-ls-remote[1]](git-ls-remote). There’s no corresponding "write" support, i.e. a `git push` into a bundle is not supported. See the "EXAMPLES" section below for examples of how to use bundles. Bundle format ------------- Bundles are `.pack` files (see [git-pack-objects[1]](git-pack-objects)) with a header indicating what references are contained within the bundle. Like the packed archive format itself, bundles can either be self-contained or be created using exclusions. See the "OBJECT PREREQUISITES" section below. Bundles created using revision exclusions are "thin packs" created using the `--thin` option to [git-pack-objects[1]](git-pack-objects), and unbundled using the `--fix-thin` option to [git-index-pack[1]](git-index-pack). There is no option to create a "thick pack" when using revision exclusions, and users should not be concerned about the difference.
By using "thin packs", bundles created using exclusions are smaller in size. That they’re "thin" under the hood is merely noted here as a curiosity, and as a reference to other documentation. See [gitformat-bundle[5]](gitformat-bundle) for more details and the discussion of "thin pack" in [gitformat-pack[5]](gitformat-pack) for further details. Options ------- create [options] <file> <git-rev-list-args> Used to create a bundle named `file`. This requires the `<git-rev-list-args>` arguments to define the bundle contents. `options` contains the options specific to the `git bundle create` subcommand. verify <file> Used to check that a bundle file is valid and will apply cleanly to the current repository. This includes checks on the bundle format itself as well as checking that the prerequisite commits exist and are fully linked in the current repository. Then, `git bundle` prints a list of missing commits, if any. Finally, information about additional capabilities, such as "object filter", is printed. See "Capabilities" in [gitformat-bundle[5]](gitformat-bundle) for more information. The exit code is zero for success, but will be nonzero if the bundle file is invalid. list-heads <file> Lists the references defined in the bundle. If followed by a list of references, only references matching those given are printed out. unbundle <file> Passes the objects in the bundle to `git index-pack` for storage in the repository, then prints the names of all defined references. If a list of references is given, only references matching those in the list are printed. This command is really plumbing, intended to be called only by `git fetch`. <git-rev-list-args> A list of arguments, acceptable to `git rev-parse` and `git rev-list` (and containing a named ref, see SPECIFYING REFERENCES below), that specifies the specific objects and references to transport. 
For example, `master~10..master` causes the current master reference to be packaged along with all objects added since its 10th ancestor commit. There is no explicit limit to the number of references and objects that may be packaged. [<refname>…​] A list of references used to limit the references reported as available. This is principally of use to `git fetch`, which expects to receive only those references asked for and not necessarily everything in the pack (in this case, `git bundle` acts like `git fetch-pack`). --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --all-progress When --stdout is specified then progress report is displayed during the object count and compression phases but inhibited during the write-out phase. The reason is that in some cases the output stream is directly linked to another command which may wish to display progress status of its own as it processes incoming pack data. This flag is like --progress except that it forces progress report for the write-out phase as well even if --stdout is used. --all-progress-implied This is used to imply --all-progress whenever progress display is activated. Unlike --all-progress this flag doesn’t actually force any progress display by itself. --version=<version> Specify the bundle version. Version 2 is the older format and can only be used with SHA-1 repositories; the newer version 3 contains capabilities that permit extensions. The default is the oldest supported format, based on the hash algorithm in use. -q --quiet This flag makes the command not report its progress on the standard error stream. Specifying references --------------------- Revisions must be accompanied by reference names to be packaged in a bundle.
More than one reference may be packaged, and more than one set of prerequisite objects can be specified. The objects packaged are those not contained in the union of the prerequisites. The `git bundle create` command resolves the reference names for you using the same rules as `git rev-parse --abbrev-ref=loose`. Each prerequisite can be specified explicitly (e.g. `^master~10`), or implicitly (e.g. `master~10..master`, `--since=10.days.ago master`). All of these simple cases are OK (assuming we have a "master" and "next" branch): ``` $ git bundle create master.bundle master $ echo master | git bundle create master.bundle --stdin $ git bundle create master-and-next.bundle master next $ (echo master; echo next) | git bundle create master-and-next.bundle --stdin ``` And so are these (and the same but omitted `--stdin` examples): ``` $ git bundle create recent-master.bundle master~10..master $ git bundle create recent-updates.bundle master~10..master next~5..next ``` A revision name or a range whose right-hand-side cannot be resolved to a reference is not accepted: ``` $ git bundle create HEAD.bundle $(git rev-parse HEAD) fatal: Refusing to create empty bundle. $ git bundle create master-yesterday.bundle master~10..master~5 fatal: Refusing to create empty bundle. ``` Object prerequisites -------------------- When creating bundles it is possible to create a self-contained bundle that can be unbundled in a repository with no common history, as well as providing negative revisions to exclude objects needed in the earlier parts of the history. Feeding a revision such as `new` to `git bundle create` will create a bundle file that contains all the objects reachable from the revision `new`. 
That bundle can be unbundled in any repository to obtain a full history that leads to the revision `new`: ``` $ git bundle create full.bundle new ``` A revision range such as `old..new` will produce a bundle file that will require the revision `old` (and any objects reachable from it) to exist for the bundle to be "unbundle"-able: ``` $ git bundle create full.bundle old..new ``` A self-contained bundle without any prerequisites can be extracted into anywhere, even into an empty repository, or be cloned from (i.e., `new`, but not `old..new`). It is okay to err on the side of caution, causing the bundle file to contain objects already in the destination, as these are ignored when unpacking at the destination. If you want to match `git clone --mirror`, which would include your refs such as `refs/remotes/*`, use `--all`. If you want to provide the same set of refs that a clone directly from the source repository would get, use `--branches --tags` for the `<git-rev-list-args>`. The `git bundle verify` command can be used to check whether your recipient repository has the required prerequisite commits for a bundle. Examples -------- Assume you want to transfer the history from a repository R1 on machine A to another repository R2 on machine B. For whatever reason, direct connection between A and B is not allowed, but we can move data from A to B via some mechanism (CD, email, etc.). We want to update R2 with development made on the branch master in R1. To bootstrap the process, you can first create a bundle that does not have any prerequisites. You can use a tag to remember up to what commit you last processed, in order to make it easy to later update the other repository with an incremental bundle: ``` machineA$ cd R1 machineA$ git bundle create file.bundle master machineA$ git tag -f lastR2bundle master ``` Then you transfer file.bundle to the target machine B. 
Because this bundle does not require any existing object to be extracted, you can create a new repository on machine B by cloning from it: ``` machineB$ git clone -b master /home/me/tmp/file.bundle R2 ``` This will define a remote called "origin" in the resulting repository that lets you fetch and pull from the bundle. The $GIT\_DIR/config file in R2 will have an entry like this: ``` [remote "origin"] url = /home/me/tmp/file.bundle fetch = refs/heads/*:refs/remotes/origin/* ``` To update the resulting mine.git repository, you can fetch or pull after replacing the bundle stored at /home/me/tmp/file.bundle with incremental updates. After working some more in the original repository, you can create an incremental bundle to update the other repository: ``` machineA$ cd R1 machineA$ git bundle create file.bundle lastR2bundle..master machineA$ git tag -f lastR2bundle master ``` You then transfer the bundle to the other machine to replace /home/me/tmp/file.bundle, and pull from it. ``` machineB$ cd R2 machineB$ git pull ``` If you know up to what commit the intended recipient repository should have the necessary objects, you can use that knowledge to specify the prerequisites, giving a cut-off point to limit the revisions and objects that go in the resulting bundle. The previous example used the lastR2bundle tag for this purpose, but you can use any other options that you would give to the [git-log[1]](git-log) command. 
Here are more examples: You can use a tag that is present in both: ``` $ git bundle create mybundle v1.0.0..master ``` You can use a prerequisite based on time: ``` $ git bundle create mybundle --since=10.days master ``` You can use the number of commits: ``` $ git bundle create mybundle -10 master ``` You can run `git-bundle verify` to see if you can extract from a bundle that was created with a prerequisite: ``` $ git bundle verify mybundle ``` This will list what commits you must have in order to extract from the bundle and will error out if you do not have them. A bundle from a recipient repository’s point of view is just like a regular repository which it fetches or pulls from. You can, for example, map references when fetching: ``` $ git fetch mybundle master:localRef ``` You can also see what references it offers: ``` $ git ls-remote mybundle ``` File format ----------- See [gitformat-bundle[5]](gitformat-bundle). git git-add git-add ======= Name ---- git-add - Add file contents to the index Synopsis -------- ``` git add [--verbose | -v] [--dry-run | -n] [--force | -f] [--interactive | -i] [--patch | -p] [--edit | -e] [--[no-]all | --[no-]ignore-removal | [--update | -u]] [--sparse] [--intent-to-add | -N] [--refresh] [--ignore-errors] [--ignore-missing] [--renormalize] [--chmod=(+|-)x] [--pathspec-from-file=<file> [--pathspec-file-nul]] [--] [<pathspec>…​] ``` Description ----------- This command updates the index using the current content found in the working tree, to prepare the content staged for the next commit. It typically adds the current content of existing paths as a whole, but with some options it can also be used to add content with only part of the changes made to the working tree files applied, or remove paths that do not exist in the working tree anymore. The "index" holds a snapshot of the content of the working tree, and it is this snapshot that is taken as the contents of the next commit. 
Thus after making any changes to the working tree, and before running the commit command, you must use the `add` command to add any new or modified files to the index. This command can be performed multiple times before a commit. It only adds the content of the specified file(s) at the time the add command is run; if you want subsequent changes included in the next commit, then you must run `git add` again to add the new content to the index. The `git status` command can be used to obtain a summary of which files have changes that are staged for the next commit. The `git add` command will not add ignored files by default. If any ignored files were explicitly specified on the command line, `git add` will fail with a list of ignored files. Ignored files reached by directory recursion or filename globbing performed by Git (quote your globs before the shell) will be silently ignored. The `git add` command can be used to add ignored files with the `-f` (force) option. Please see [git-commit[1]](git-commit) for alternative ways to add content to a commit. Options ------- <pathspec>…​ Files to add content from. Fileglobs (e.g. `*.c`) can be given to add all matching files. Also a leading directory name (e.g. `dir` to add `dir/file1` and `dir/file2`) can be given to update the index to match the current state of the directory as a whole (e.g. specifying `dir` will record not just a file `dir/file1` modified in the working tree, a file `dir/file2` added to the working tree, but also a file `dir/file3` removed from the working tree). Note that older versions of Git used to ignore removed files; use `--no-all` option if you want to add modified or new files but ignore removed ones. For more details about the <pathspec> syntax, see the `pathspec` entry in [gitglossary[7]](gitglossary). -n --dry-run Don’t actually add the file(s), just show if they exist and/or will be ignored. -v --verbose Be verbose. -f --force Allow adding otherwise ignored files. 
--sparse Allow updating index entries outside of the sparse-checkout cone. Normally, `git add` refuses to update index entries whose paths do not fit within the sparse-checkout cone, since those files might be removed from the working tree without warning. See [git-sparse-checkout[1]](git-sparse-checkout) for more details. -i --interactive Add modified contents in the working tree interactively to the index. Optional path arguments may be supplied to limit operation to a subset of the working tree. See “Interactive mode” for details. -p --patch Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index. This effectively runs `add --interactive`, but bypasses the initial command menu and directly jumps to the `patch` subcommand. See “Interactive mode” for details. -e --edit Open the diff vs. the index in an editor and let the user edit it. After the editor was closed, adjust the hunk headers and apply the patch to the index. The intent of this option is to pick and choose lines of the patch to apply, or even to modify the contents of lines to be staged. This can be quicker and more flexible than using the interactive hunk selector. However, it is easy to confuse oneself and create a patch that does not apply to the index. See EDITING PATCHES below. -u --update Update the index just where it already has an entry matching <pathspec>. This removes as well as modifies index entries to match the working tree, but adds no new files. If no <pathspec> is given when `-u` option is used, all tracked files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). -A --all --no-ignore-removal Update the index not only where the working tree has a file matching <pathspec> but also where the index already has an entry. 
This adds, modifies, and removes index entries to match the working tree. If no <pathspec> is given when `-A` option is used, all files in the entire working tree are updated (old versions of Git used to limit the update to the current directory and its subdirectories). --no-all --ignore-removal Update the index by adding new files that are unknown to the index and files modified in the working tree, but ignore files that have been removed from the working tree. This option is a no-op when no <pathspec> is used. This option is primarily to help users who are used to older versions of Git, whose "git add <pathspec>…​" was a synonym for "git add --no-all <pathspec>…​", i.e. ignored removed files. -N --intent-to-add Record only the fact that the path will be added later. An entry for the path is placed in the index with no content. This is useful for, among other things, showing the unstaged content of such files with `git diff` and committing them with `git commit -a`. --refresh Don’t add the file(s), but only refresh their stat() information in the index. --ignore-errors If some files could not be added because of errors indexing them, do not abort the operation, but continue adding the others. The command shall still exit with non-zero status. The configuration variable `add.ignoreErrors` can be set to true to make this the default behaviour. --ignore-missing This option can only be used together with --dry-run. By using this option the user can check if any of the given files would be ignored, no matter if they are already present in the work tree or not. --no-warn-embedded-repo By default, `git add` will warn when adding an embedded repository to the index without using `git submodule add` to create an entry in `.gitmodules`. This option will suppress the warning (e.g., if you are manually performing operations on submodules). --renormalize Apply the "clean" process freshly to all tracked files to forcibly add them again to the index. 
This is useful after changing `core.autocrlf` configuration or the `text` attribute in order to correct files added with wrong CRLF/LF line endings. This option implies `-u`. Lone CR characters are untouched, thus while a CRLF cleans to LF, a CRCRLF sequence is only partially cleaned to CRLF.

--chmod=(+|-)x

Override the executable bit of the added files. The executable bit is only changed in the index, the files on disk are left unchanged.

--pathspec-from-file=<file>

Pathspec is passed in `<file>` instead of commandline args. If `<file>` is exactly `-` then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). See also `--pathspec-file-nul` and global `--literal-pathspecs`.

--pathspec-file-nul

Only meaningful with `--pathspec-from-file`. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes).

--

This option can be used to separate command-line options from the list of files (useful when filenames might be mistaken for command-line options).

Examples
--------

* Adds content from all `*.txt` files under `Documentation` directory and its subdirectories:

```
$ git add Documentation/\*.txt
```

Note that the asterisk `*` is quoted from the shell in this example; this lets the command include the files from subdirectories of `Documentation/` directory.

* Considers adding content from all `git-*.sh` scripts:

```
$ git add git-*.sh
```

Because this example lets the shell expand the asterisk (i.e. you are listing the files explicitly), it does not consider `subdir/git-foo.sh`.

Interactive mode
----------------

When the command enters the interactive mode, it shows the output of the `status` subcommand, and then goes into its interactive command loop. The command loop shows the list of subcommands available, and gives a prompt "What now> ".
In general, when the prompt ends with a single `>`, you can pick only one of the choices given and type return, like this:

```
    *** Commands ***
      1: status       2: update       3: revert       4: add untracked
      5: patch        6: diff         7: quit         8: help
    What now> 1
```

You also could say `s` or `sta` or `status` above as long as the choice is unique. The main command loop has 6 subcommands (plus help and quit).

status

This shows the change between HEAD and index (i.e. what will be committed if you say `git commit`), and between index and working tree files (i.e. what you could stage further before `git commit` using `git add`) for each path. A sample output looks like this:

```
              staged     unstaged path
     1:       binary      nothing foo.png
     2:     +403/-35        +1/-1 git-add--interactive.perl
```

It shows that foo.png has differences from HEAD (but that is binary so line count cannot be shown) and there is no difference between indexed copy and the working tree version (if the working tree version were also different, `binary` would have been shown in place of `nothing`). The other file, git-add--interactive.perl, has 403 lines added and 35 lines deleted if you commit what is in the index, but working tree file has further modifications (one addition and one deletion).

update

This shows the status information and issues an "Update>>" prompt. When the prompt ends with double `>>`, you can make more than one selection, concatenated with whitespace or comma. Also you can say ranges. E.g. "2-5 7,9" to choose 2,3,4,5,7,9 from the list. If the second number in a range is omitted, all remaining patches are taken. E.g. "7-" to choose 7,8,9 from the list. You can say `*` to choose everything.
What you chose are then highlighted with `*`, like this:

```
           staged     unstaged path
  1:       binary      nothing foo.png
* 2:     +403/-35        +1/-1 git-add--interactive.perl
```

To remove selection, prefix the input with `-` like this:

```
Update>> -2
```

After making the selection, answer with an empty line to stage the contents of working tree files for selected paths in the index.

revert

This has a very similar UI to `update`, and the staged information for selected paths are reverted to that of the HEAD version. Reverting new paths makes them untracked.

add untracked

This has a very similar UI to `update` and `revert`, and lets you add untracked paths to the index.

patch

This lets you choose one path out of a `status` like selection. After choosing the path, it presents the diff between the index and the working tree file and asks you if you want to stage the change of each hunk. You can select one of the following options and type return:

```
y - stage this hunk
n - do not stage this hunk
q - quit; do not stage this hunk or any of the remaining ones
a - stage this hunk and all later hunks in the file
d - do not stage this hunk or any of the later hunks in the file
g - select a hunk to go to
/ - search for a hunk matching the given regex
j - leave this hunk undecided, see next undecided hunk
J - leave this hunk undecided, see next hunk
k - leave this hunk undecided, see previous undecided hunk
K - leave this hunk undecided, see previous hunk
s - split the current hunk into smaller hunks
e - manually edit the current hunk
? - print help
```

After deciding the fate for all hunks, if there is any hunk that was chosen, the index is updated with the selected hunks. You can omit having to type return here, by setting the configuration variable `interactive.singleKey` to `true`.

diff

This lets you review what will be committed (i.e. between HEAD and index).
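The "staged" and "unstaged" columns of the interactive listing correspond to two plain diffs you can also run directly: `git diff --cached` (HEAD vs. index) and `git diff` (index vs. working tree). A minimal sketch of that correspondence, using a throwaway repository and a made-up file name:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > foo.txt
git add foo.txt
git commit -q -m initial
echo two >> foo.txt
git add foo.txt          # this change is now "staged" (index differs from HEAD)
echo three >> foo.txt    # a further edit is "unstaged" (worktree differs from index)
git diff --cached --stat # the "staged" column: HEAD vs. index
git diff --stat          # the "unstaged" column: index vs. working tree
```

Both diffstats report `foo.txt`, one per column of the interactive `status` output.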
Editing patches --------------- Invoking `git add -e` or selecting `e` from the interactive hunk selector will open a patch in your editor; after the editor exits, the result is applied to the index. You are free to make arbitrary changes to the patch, but note that some changes may have confusing results, or even result in a patch that cannot be applied. If you want to abort the operation entirely (i.e., stage nothing new in the index), simply delete all lines of the patch. The list below describes some common things you may see in a patch, and which editing operations make sense on them. added content Added content is represented by lines beginning with "+". You can prevent staging any addition lines by deleting them. removed content Removed content is represented by lines beginning with "-". You can prevent staging their removal by converting the "-" to a " " (space). modified content Modified content is represented by "-" lines (removing the old content) followed by "+" lines (adding the replacement content). You can prevent staging the modification by converting "-" lines to " ", and removing "+" lines. Beware that modifying only half of the pair is likely to introduce confusing changes to the index. There are also more complex operations that can be performed. But beware that because the patch is applied only to the index and not the working tree, the working tree will appear to "undo" the change in the index. For example, introducing a new line into the index that is in neither the HEAD nor the working tree will stage the new line for commit, but the line will appear to be reverted in the working tree. Avoid using these constructs, or do so with extreme caution. removing untouched content Content which does not differ between the index and working tree may be shown on context lines, beginning with a " " (space). You can stage context lines for removal by converting the space to a "-". The resulting working tree file will appear to re-add the content. 
modifying existing content One can also modify context lines by staging them for removal (by converting " " to "-") and adding a "+" line with the new content. Similarly, one can modify "+" lines for existing additions or modifications. In all cases, the new modification will appear reverted in the working tree. new content You may also add new content that does not exist in the patch; simply add new lines, each starting with "+". The addition will appear reverted in the working tree. There are also several operations which should be avoided entirely, as they will make the patch impossible to apply: * adding context (" ") or removal ("-") lines * deleting context or removal lines * modifying the contents of context or removal lines Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: add.ignoreErrors add.ignore-errors (deprecated) Tells `git add` to continue adding files when some files cannot be added due to indexing errors. Equivalent to the `--ignore-errors` option of [git-add[1]](git-add). `add.ignore-errors` is deprecated, as it does not follow the usual naming convention for configuration variables. add.interactive.useBuiltin Set to `false` to fall back to the original Perl implementation of the interactive version of [git-add[1]](git-add) instead of the built-in version. Is `true` by default. See also -------- [git-status[1]](git-status) [git-rm[1]](git-rm) [git-reset[1]](git-reset) [git-mv[1]](git-mv) [git-commit[1]](git-commit) [git-update-index[1]](git-update-index)
git-merge
=========

Name
----

git-merge - Join two or more development histories together

Synopsis
--------

```
git merge [-n] [--stat] [--no-commit] [--squash] [--[no-]edit]
	[--no-verify] [-s <strategy>] [-X <strategy-option>] [-S[<keyid>]]
	[--[no-]allow-unrelated-histories]
	[--[no-]rerere-autoupdate] [-m <msg>] [-F <file>]
	[--into-name <branch>] [<commit>…​]
git merge (--continue | --abort | --quit)
```

Description
-----------

Incorporates changes from the named commits (since the time their histories diverged from the current branch) into the current branch. This command is used by `git pull` to incorporate changes from another repository and can be used by hand to merge changes from one branch into another.

Assume the following history exists and the current branch is "`master`":

```
      A---B---C topic
     /
D---E---F---G master
```

Then "`git merge topic`" will replay the changes made on the `topic` branch since it diverged from `master` (i.e., `E`) until its current commit (`C`) on top of `master`, and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes.

```
      A---B---C topic
     /         \
D---E---F---G---H master
```

The second syntax ("`git merge --abort`") can only be run after the merge has resulted in conflicts. `git merge --abort` will abort the merge process and try to reconstruct the pre-merge state. However, if there were uncommitted changes when the merge started (and especially if those changes were further modified after the merge was started), `git merge --abort` will in some cases be unable to reconstruct the original (pre-merge) changes. Therefore:

**Warning**: Running `git merge` with non-trivial uncommitted changes is discouraged: while possible, it may leave you in a state that is hard to back out of in the case of a conflict.

The third syntax ("`git merge --continue`") can only be run after the merge has resulted in conflicts.
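The history sketched in the diagrams can be replayed with a few commands. This is a minimal sketch in a throwaway repository; the commit messages follow the diagram's letters and the file names are invented for illustration:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m E     # common ancestor on the current branch
git checkout -q -b topic
echo c > c.txt && git add c.txt && git commit -q -m C
git checkout -q -                    # back to the original branch
echo g > g.txt && git add g.txt && git commit -q -m G
git merge -q -m H topic              # records merge commit H with two parents
git rev-list --parents -1 HEAD       # prints H's hash followed by its two parents
```

Because both sides diverged after E, the merge cannot fast-forward and a two-parent merge commit H is created.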
Options ------- --commit --no-commit Perform the merge and commit the result. This option can be used to override --no-commit. With --no-commit perform the merge and stop just before creating a merge commit, to give the user a chance to inspect and further tweak the merge result before committing. Note that fast-forward updates do not create a merge commit and therefore there is no way to stop those merges with --no-commit. Thus, if you want to ensure your branch is not changed or updated by the merge command, use --no-ff with --no-commit. --edit -e --no-edit Invoke an editor before committing successful mechanical merge to further edit the auto-generated merge message, so that the user can explain and justify the merge. The `--no-edit` option can be used to accept the auto-generated message (this is generally discouraged). The `--edit` (or `-e`) option is still useful if you are giving a draft message with the `-m` option from the command line and want to edit it in the editor. Older scripts may depend on the historical behaviour of not allowing the user to edit the merge log message. They will see an editor opened when they run `git merge`. To make it easier to adjust such scripts to the updated behaviour, the environment variable `GIT_MERGE_AUTOEDIT` can be set to `no` at the beginning of them. --cleanup=<mode> This option determines how the merge message will be cleaned up before committing. See [git-commit[1]](git-commit) for more details. In addition, if the `<mode>` is given a value of `scissors`, scissors will be appended to `MERGE_MSG` before being passed on to the commit machinery in the case of a merge conflict. --ff --no-ff --ff-only Specifies how a merge is handled when the merged-in history is already a descendant of the current history. `--ff` is the default unless merging an annotated (and possibly signed) tag that is not stored in its natural place in the `refs/tags/` hierarchy, in which case `--no-ff` is assumed. 
With `--ff`, when possible resolve the merge as a fast-forward (only update the branch pointer to match the merged branch; do not create a merge commit). When not possible (when the merged-in history is not a descendant of the current history), create a merge commit. With `--no-ff`, create a merge commit in all cases, even when the merge could instead be resolved as a fast-forward. With `--ff-only`, resolve the merge as a fast-forward when possible. When not possible, refuse to merge and exit with a non-zero status. -S[<keyid>] --gpg-sign[=<keyid>] --no-gpg-sign GPG-sign the resulting merge commit. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. `--no-gpg-sign` is useful to countermand both `commit.gpgSign` configuration variable, and earlier `--gpg-sign`. --log[=<n>] --no-log In addition to branch names, populate the log message with one-line descriptions from at most <n> actual commits that are being merged. See also [git-fmt-merge-msg[1]](git-fmt-merge-msg). With --no-log do not list one-line descriptions from the actual commits being merged. --signoff --no-signoff Add a `Signed-off-by` trailer by the committer at the end of the commit log message. The meaning of a signoff depends on the project to which you’re committing. For example, it may certify that the committer has the rights to submit the work under the project’s license or agrees to some contributor representation, such as a Developer Certificate of Origin. (See <http://developercertificate.org> for the one used by the Linux kernel and Git projects.) Consult the documentation or leadership of the project to which you’re contributing to understand how the signoffs are used in that project. The --no-signoff option can be used to countermand an earlier --signoff option on the command line. --stat -n --no-stat Show a diffstat at the end of the merge. The diffstat is also controlled by the configuration option merge.stat. 
With -n or --no-stat do not show a diffstat at the end of the merge. --squash --no-squash Produce the working tree and index state as if a real merge happened (except for the merge information), but do not actually make a commit, move the `HEAD`, or record `$GIT_DIR/MERGE_HEAD` (to cause the next `git commit` command to create a merge commit). This allows you to create a single commit on top of the current branch whose effect is the same as merging another branch (or more in case of an octopus). With --no-squash perform the merge and commit the result. This option can be used to override --squash. With --squash, --commit is not allowed, and will fail. --[no-]verify By default, the pre-merge and commit-msg hooks are run. When `--no-verify` is given, these are bypassed. See also [githooks[5]](githooks). -s <strategy> --strategy=<strategy> Use the given merge strategy; can be supplied more than once to specify them in the order they should be tried. If there is no `-s` option, a built-in list of strategies is used instead (`ort` when merging a single head, `octopus` otherwise). -X <option> --strategy-option=<option> Pass merge strategy specific option through to the merge strategy. --verify-signatures --no-verify-signatures Verify that the tip commit of the side branch being merged is signed with a valid key, i.e. a key that has a valid uid: in the default trust model, this means the signing key has been signed by a trusted key. If the tip commit of the side branch is not signed with a valid key, the merge is aborted. --summary --no-summary Synonyms to --stat and --no-stat; these are deprecated and will be removed in the future. -q --quiet Operate quietly. Implies --no-progress. -v --verbose Be verbose. --progress --no-progress Turn progress on/off explicitly. If neither is specified, progress is shown if standard error is connected to a terminal. Note that not all merge strategies may support progress reporting. 
--autostash --no-autostash Automatically create a temporary stash entry before the operation begins, record it in the special ref `MERGE_AUTOSTASH` and apply it after the operation ends. This means that you can run the operation on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. --allow-unrelated-histories By default, `git merge` command refuses to merge histories that do not share a common ancestor. This option can be used to override this safety when merging histories of two projects that started their lives independently. As that is a very rare occasion, no configuration variable to enable this by default exists and will not be added. -m <msg> Set the commit message to be used for the merge commit (in case one is created). If `--log` is specified, a shortlog of the commits being merged will be appended to the specified message. The `git fmt-merge-msg` command can be used to give a good default for automated `git merge` invocations. The automated message can include the branch description. --into-name <branch> Prepare the default merge message as if merging to the branch `<branch>`, instead of the name of the real branch to which the merge is made. -F <file> --file=<file> Read the commit message to be used for the merge commit (in case one is created). If `--log` is specified, a shortlog of the commits being merged will be appended to the specified message. --rerere-autoupdate --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. `--no-rerere-autoupdate` is a good way to double-check what `rerere` did and catch potential mismerges, before committing the result to the index with a separate `git add`. --overwrite-ignore --no-overwrite-ignore Silently overwrite ignored files from the merge result. This is the default behavior. 
Use `--no-overwrite-ignore` to abort. --abort Abort the current conflict resolution process, and try to reconstruct the pre-merge state. If an autostash entry is present, apply it to the worktree. If there were uncommitted worktree changes present when the merge started, `git merge --abort` will in some cases be unable to reconstruct these changes. It is therefore recommended to always commit or stash your changes before running `git merge`. `git merge --abort` is equivalent to `git reset --merge` when `MERGE_HEAD` is present unless `MERGE_AUTOSTASH` is also present in which case `git merge --abort` applies the stash entry to the worktree whereas `git reset --merge` will save the stashed changes in the stash list. --quit Forget about the current merge in progress. Leave the index and the working tree as-is. If `MERGE_AUTOSTASH` is present, the stash entry will be saved to the stash list. --continue After a `git merge` stops due to conflicts you can conclude the merge by running `git merge --continue` (see "HOW TO RESOLVE CONFLICTS" section below). <commit>…​ Commits, usually other branch heads, to merge into our branch. Specifying more than one commit will create a merge with more than two parents (affectionately called an Octopus merge). If no commit is given from the command line, merge the remote-tracking branches that the current branch is configured to use as its upstream. See also the configuration section of this manual page. When `FETCH_HEAD` (and no other commit) is specified, the branches recorded in the `.git/FETCH_HEAD` file by the previous invocation of `git fetch` for merging are merged to the current branch. Pre-merge checks ---------------- Before applying outside changes, you should get your own work in good shape and committed locally, so it will not be clobbered if there are conflicts. See also [git-stash[1]](git-stash). 
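The advice above can be sketched as a stash, merge, unstash round trip. This is only one possible workflow, shown in a throwaway repository with invented branch and file names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > notes.txt && git add notes.txt && git commit -q -m base
git checkout -q -b feature
echo feature >> notes.txt && git commit -q -am feature
git checkout -q -
echo wip > wip.txt          # half-done local work, not yet committed
git stash push -q -u        # set it aside so the merge starts from a clean state
git merge -q feature        # here this is a simple fast-forward
git stash pop -q            # bring the local work back afterwards
```

Committing the work in progress instead of stashing it is equally valid; the point is that the merge should not start on top of uncommitted changes it might clobber.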
`git pull` and `git merge` will stop without doing anything when local uncommitted changes overlap with files that `git pull`/`git merge` may need to update. To avoid recording unrelated changes in the merge commit, `git pull` and `git merge` will also abort if there are any changes registered in the index relative to the `HEAD` commit. (Special narrow exceptions to this rule may exist depending on which merge strategy is in use, but generally, the index must match HEAD.) If all named commits are already ancestors of `HEAD`, `git merge` will exit early with the message "Already up to date." Fast-forward merge ------------------ Often the current branch head is an ancestor of the named commit. This is the most common case especially when invoked from `git pull`: you are tracking an upstream repository, you have committed no local changes, and now you want to update to a newer upstream revision. In this case, a new commit is not needed to store the combined history; instead, the `HEAD` (along with the index) is updated to point at the named commit, without creating an extra merge commit. This behavior can be suppressed with the `--no-ff` option. True merge ---------- Except in a fast-forward merge (see above), the branches to be merged must be tied together by a merge commit that has both of them as its parents. A merged version reconciling the changes from all branches to be merged is committed, and your `HEAD`, index, and working tree are updated to it. It is possible to have modifications in the working tree as long as they do not overlap; the update will preserve them. When it is not obvious how to reconcile the changes, the following happens: 1. The `HEAD` pointer stays the same. 2. The `MERGE_HEAD` ref is set to point to the other branch head. 3. Paths that merged cleanly are updated both in the index file and in your working tree. 4. 
For conflicting paths, the index file records up to three versions: stage 1 stores the version from the common ancestor, stage 2 from `HEAD`, and stage 3 from `MERGE_HEAD` (you can inspect the stages with `git ls-files -u`). The working tree files contain the result of the "merge" program; i.e. 3-way merge results with familiar conflict markers `<<<` `===` `>>>`.

5. No other changes are made. In particular, the local modifications you had before you started merge will stay the same and the index entries for them stay as they were, i.e. matching `HEAD`.

If you tried a merge which resulted in complex conflicts and want to start over, you can recover with `git merge --abort`.

Merging tag
-----------

When merging an annotated (and possibly signed) tag, Git always creates a merge commit even if a fast-forward merge is possible, and the commit message template is prepared with the tag message. Additionally, if the tag is signed, the signature check is reported as a comment in the message template. See also [git-tag[1]](git-tag).

When you want to just integrate with the work leading to the commit that happens to be tagged, e.g. synchronizing with an upstream release point, you may not want to make an unnecessary merge commit. In such a case, you can "unwrap" the tag yourself before feeding it to `git merge`, or pass `--ff-only` when you do not have any work on your own. e.g.

```
git fetch origin
git merge v1.2.3^0
git merge --ff-only v1.2.3
```

How conflicts are presented
---------------------------

During a merge, the working tree files are updated to reflect the result of the merge. Among the changes made to the common ancestor’s version, non-overlapping ones (that is, you changed an area of the file while the other side left that area intact, or vice versa) are incorporated in the final result verbatim.
When both sides made changes to the same area, however, Git cannot randomly pick one side over the other, and asks you to resolve it by leaving what both sides did to that area. By default, Git uses the same style as the one used by the "merge" program from the RCS suite to present such a conflicted hunk, like this:

```
Here are lines that are either unchanged from the common
ancestor, or cleanly resolved because only one side changed,
or cleanly resolved because both sides changed the same way.
<<<<<<< yours:sample.txt
Conflict resolution is hard;
let's go shopping.
=======
Git makes conflict resolution easy.
>>>>>>> theirs:sample.txt
And here is another line that is cleanly resolved or unmodified.
```

The area where a pair of conflicting changes happened is marked with markers `<<<<<<<`, `=======`, and `>>>>>>>`. The part before the `=======` is typically your side, and the part afterwards is typically their side.

The default format does not show what the original said in the conflicting area. You cannot tell how many lines are deleted and replaced with Barbie’s remark on your side. The only thing you can tell is that your side wants to say it is hard and you’d prefer to go shopping, while the other side wants to claim it is easy.

An alternative style can be used by setting the "merge.conflictStyle" configuration variable to either "diff3" or "zdiff3". In "diff3" style, the above conflict may look like this:

```
Here are lines that are either unchanged from the common
ancestor, or cleanly resolved because only one side changed,
<<<<<<< yours:sample.txt
or cleanly resolved because both sides changed the same way.
Conflict resolution is hard;
let's go shopping.
||||||| base:sample.txt
or cleanly resolved because both sides changed identically.
Conflict resolution is hard.
=======
or cleanly resolved because both sides changed the same way.
Git makes conflict resolution easy.
>>>>>>> theirs:sample.txt
And here is another line that is cleanly resolved or unmodified.
```

while in "zdiff3" style, it may look like this:

```
Here are lines that are either unchanged from the common
ancestor, or cleanly resolved because only one side changed,
or cleanly resolved because both sides changed the same way.
<<<<<<< yours:sample.txt
Conflict resolution is hard;
let's go shopping.
||||||| base:sample.txt
or cleanly resolved because both sides changed identically.
Conflict resolution is hard.
=======
Git makes conflict resolution easy.
>>>>>>> theirs:sample.txt
And here is another line that is cleanly resolved or unmodified.
```

In addition to the `<<<<<<<`, `=======`, and `>>>>>>>` markers, it uses another `|||||||` marker that is followed by the original text. You can tell that the original just stated a fact, and your side simply gave in to that statement and gave up, while the other side tried to have a more positive attitude. You can sometimes come up with a better resolution by viewing the original.

How to resolve conflicts
------------------------

After seeing a conflict, you can do two things:

* Decide not to merge. The only clean-ups you need are to reset the index file to the `HEAD` commit to reverse 2. and to clean up working tree changes made by 2. and 3.; `git merge --abort` can be used for this.
* Resolve the conflicts. Git will mark the conflicts in the working tree. Edit the files into shape and `git add` them to the index. Use `git commit` or `git merge --continue` to seal the deal. The latter command checks whether there is an (interrupted) merge in progress before calling `git commit`.

You can work through the conflict with a number of tools:

* Use a mergetool. `git mergetool` to launch a graphical mergetool which will work you through the merge.
* Look at the diffs. `git diff` will show a three-way diff, highlighting changes from both the `HEAD` and `MERGE_HEAD` versions.
* Look at the diffs from each branch. `git log --merge -p <path>` will show diffs first for the `HEAD` version and then the `MERGE_HEAD` version.
* Look at the originals. `git show :1:filename` shows the common ancestor, `git show :2:filename` shows the `HEAD` version, and `git show :3:filename` shows the `MERGE_HEAD` version. Examples -------- * Merge branches `fixes` and `enhancements` on top of the current branch, making an octopus merge: ``` $ git merge fixes enhancements ``` * Merge branch `obsolete` into the current branch, using `ours` merge strategy: ``` $ git merge -s ours obsolete ``` * Merge branch `maint` into the current branch, but do not make a new commit automatically: ``` $ git merge --no-commit maint ``` This can be used when you want to include further changes to the merge, or want to write your own merge commit message. You should refrain from abusing this option to sneak substantial changes into a merge commit. Small fixups like bumping release/version name would be acceptable. Merge strategies ---------------- The merge mechanism (`git merge` and `git pull` commands) allows the backend `merge strategies` to be chosen with `-s` option. Some strategies can also take their own options, which can be passed by giving `-X<option>` arguments to `git merge` and/or `git pull`. ort This is the default merge strategy when pulling or merging one branch. This strategy can only resolve two heads using a 3-way merge algorithm. When there is more than one common ancestor that can be used for 3-way merge, it creates a merged tree of the common ancestors and uses that as the reference tree for the 3-way merge. This has been reported to result in fewer merge conflicts without causing mismerges by tests done on actual merge commits taken from Linux 2.6 kernel development history. Additionally this strategy can detect and handle merges involving renames. It does not make use of detected copies. The name for this algorithm is an acronym ("Ostensibly Recursive’s Twin") and came from the fact that it was written as a replacement for the previous default algorithm, `recursive`. 
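As a quick sketch of strategy options in action, the following drives a merge whose single conflicting hunk is auto-resolved in our favor via `-X ours`. All names here (the temporary directory, branch `topic`, file `sample.txt`) are hypothetical:

```shell
# Minimal sketch: auto-resolve a conflicting hunk in our favor.
# Branch name "topic" and file name "sample.txt" are hypothetical.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)   # works for "master" or "main"

echo 'original line' > sample.txt
git add sample.txt && git commit -q -m 'base'

git checkout -q -b topic
echo 'their change' > sample.txt
git commit -q -am 'theirs'

git checkout -q "$main"
echo 'our change' > sample.txt
git commit -q -am 'ours'

# Without -X ours this merge would stop with conflict markers;
# with it, the conflicting hunk is taken from our side.
git merge --no-edit -X ours topic
cat sample.txt    # still reads "our change"
```

Note that this is the `ours` *option* (keep non-conflicting changes from the other side), not the `ours` *strategy* (discard the other side entirely), as explained below.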
The `ort` strategy can take the following options: ours This option forces conflicting hunks to be auto-resolved cleanly by favoring `our` version. Changes from the other tree that do not conflict with our side are reflected in the merge result. For a binary file, the entire contents are taken from our side. This should not be confused with the `ours` merge strategy, which does not even look at what the other tree contains at all. It discards everything the other tree did, declaring `our` history contains all that happened in it. theirs This is the opposite of `ours`; note that, unlike `ours`, there is no `theirs` merge strategy to confuse this merge option with. ignore-space-change ignore-all-space ignore-space-at-eol ignore-cr-at-eol Treats lines with the indicated type of whitespace change as unchanged for the sake of a three-way merge. Whitespace changes mixed with other changes to a line are not ignored. See also [git-diff[1]](git-diff) `-b`, `-w`, `--ignore-space-at-eol`, and `--ignore-cr-at-eol`. * If `their` version only introduces whitespace changes to a line, `our` version is used; * If `our` version introduces whitespace changes but `their` version includes a substantial change, `their` version is used; * Otherwise, the merge proceeds in the usual way. renormalize This runs a virtual check-out and check-in of all three stages of a file when resolving a three-way merge. This option is meant to be used when merging branches with different clean filters or end-of-line normalization rules. See "Merging branches with differing checkin/checkout attributes" in [gitattributes[5]](gitattributes) for details. no-renormalize Disables the `renormalize` option. This overrides the `merge.renormalize` configuration variable. find-renames[=<n>] Turn on rename detection, optionally setting the similarity threshold. This is the default. This overrides the `merge.renames` configuration variable. See also [git-diff[1]](git-diff) `--find-renames`. 
rename-threshold=<n> Deprecated synonym for `find-renames=<n>`. subtree[=<path>] This option is a more advanced form of `subtree` strategy, where the strategy makes a guess on how two trees must be shifted to match with each other when merging. Instead, the specified path is prefixed (or stripped from the beginning) to make the shape of two trees to match. recursive This can only resolve two heads using a 3-way merge algorithm. When there is more than one common ancestor that can be used for 3-way merge, it creates a merged tree of the common ancestors and uses that as the reference tree for the 3-way merge. This has been reported to result in fewer merge conflicts without causing mismerges by tests done on actual merge commits taken from Linux 2.6 kernel development history. Additionally this can detect and handle merges involving renames. It does not make use of detected copies. This was the default strategy for resolving two heads from Git v0.99.9k until v2.33.0. The `recursive` strategy takes the same options as `ort`. However, there are three additional options that `ort` ignores (not documented above) that are potentially useful with the `recursive` strategy: patience Deprecated synonym for `diff-algorithm=patience`. diff-algorithm=[patience|minimal|histogram|myers] Use a different diff algorithm while merging, which can help avoid mismerges that occur due to unimportant matching lines (such as braces from distinct functions). See also [git-diff[1]](git-diff) `--diff-algorithm`. Note that `ort` specifically uses `diff-algorithm=histogram`, while `recursive` defaults to the `diff.algorithm` config setting. no-renames Turn off rename detection. This overrides the `merge.renames` configuration variable. See also [git-diff[1]](git-diff) `--no-renames`. resolve This can only resolve two heads (i.e. the current branch and another branch you pulled from) using a 3-way merge algorithm. It tries to carefully detect criss-cross merge ambiguities. 
It does not handle renames. octopus This resolves cases with more than two heads, but refuses to do a complex merge that needs manual resolution. It is primarily meant to be used for bundling topic branch heads together. This is the default merge strategy when pulling or merging more than one branch. ours This resolves any number of heads, but the resulting tree of the merge is always that of the current branch head, effectively ignoring all changes from all other branches. It is meant to be used to supersede old development history of side branches. Note that this is different from the -Xours option to the `recursive` merge strategy. subtree This is a modified `ort` strategy. When merging trees A and B, if B corresponds to a subtree of A, B is first adjusted to match the tree structure of A, instead of reading the trees at the same level. This adjustment is also done to the common ancestor tree. With the strategies that use 3-way merge (including the default, `ort`), if a change is made on both branches, but later reverted on one of the branches, that change will be present in the merged result; some people find this behavior confusing. It occurs because only the heads and the merge base are considered when performing a merge, not the individual commits. The merge algorithm therefore considers the reverted change as no change at all, and substitutes the changed version instead. Configuration ------------- branch.<name>.mergeOptions Sets default options for merging into branch <name>. The syntax and supported options are the same as those of `git merge`, but option values containing whitespace characters are currently not supported. Everything above this line in this section isn’t included from the [git-config[1]](git-config) documentation. The content that follows is the same as what’s found there: merge.conflictStyle Specify the style in which conflicted hunks are written out to working tree files upon merge. 
The default is "merge", which shows a `<<<<<<<` conflict marker, changes made by one side, a `=======` marker, changes made by the other side, and then a `>>>>>>>` marker. An alternate style, "diff3", adds a `|||||||` marker and the original text before the `=======` marker. The "merge" style tends to produce smaller conflict regions than diff3, both because of the exclusion of the original text, and because when a subset of lines match on the two sides they are just pulled out of the conflict region. Another alternate style, "zdiff3", is similar to diff3 but removes matching lines on the two sides from the conflict region when those matching lines appear near either the beginning or end of a conflict region. merge.defaultToUpstream If merge is called without any commit argument, merge the upstream branches configured for the current branch by using their last observed values stored in their remote-tracking branches. The values of the `branch.<current branch>.merge` that name the branches at the remote named by `branch.<current branch>.remote` are consulted, and then they are mapped via `remote.<remote>.fetch` to their corresponding remote-tracking branches, and the tips of these tracking branches are merged. Defaults to true. merge.ff By default, Git does not create an extra merge commit when merging a commit that is a descendant of the current commit. Instead, the tip of the current branch is fast-forwarded. When set to `false`, this variable tells Git to create an extra merge commit in such a case (equivalent to giving the `--no-ff` option from the command line). When set to `only`, only such fast-forward merges are allowed (equivalent to giving the `--ff-only` option from the command line). merge.verifySignatures If true, this is equivalent to the --verify-signatures command line option. See [git-merge[1]](git-merge) for details. merge.branchdesc In addition to branch names, populate the log message with the branch description text associated with them. 
Defaults to false. merge.log In addition to branch names, populate the log message with at most the specified number of one-line descriptions from the actual commits that are being merged. Defaults to false, and true is a synonym for 20. merge.suppressDest By adding a glob that matches the names of integration branches to this multi-valued configuration variable, the default merge message computed for merges into these integration branches will omit "into <branch name>" from its title. An element with an empty value can be used to clear the list of globs accumulated from previous configuration entries. When there is no `merge.suppressDest` variable defined, the default value of `master` is used for backward compatibility. merge.renameLimit The number of files to consider in the exhaustive portion of rename detection during a merge. If not specified, defaults to the value of diff.renameLimit. If neither merge.renameLimit nor diff.renameLimit are specified, currently defaults to 7000. This setting has no effect if rename detection is turned off. merge.renames Whether Git detects renames. If set to "false", rename detection is disabled. If set to "true", basic rename detection is enabled. Defaults to the value of diff.renames. merge.directoryRenames Whether Git detects directory renames, affecting what happens at merge time to new files added to a directory on one side of history when that directory was renamed on the other side of history. If merge.directoryRenames is set to "false", directory rename detection is disabled, meaning that such new files will be left behind in the old directory. If set to "true", directory rename detection is enabled, meaning that such new files will be moved into the new directory. If set to "conflict", a conflict will be reported for such paths. If merge.renames is false, merge.directoryRenames is ignored and treated as false. Defaults to "conflict". 
merge.renormalize Tell Git that canonical representation of files in the repository has changed over time (e.g. earlier commits record text files with CRLF line endings, but recent ones use LF line endings). In such a repository, Git can convert the data recorded in commits to a canonical form before performing a merge to reduce unnecessary conflicts. For more information, see section "Merging branches with differing checkin/checkout attributes" in [gitattributes[5]](gitattributes). merge.stat Whether to print the diffstat between ORIG\_HEAD and the merge result at the end of the merge. True by default. merge.autoStash When set to true, automatically create a temporary stash entry before the operation begins, and apply it after the operation ends. This means that you can run merge on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. This option can be overridden by the `--no-autostash` and `--autostash` options of [git-merge[1]](git-merge). Defaults to false. merge.tool Controls which merge tool is used by [git-mergetool[1]](git-mergetool). The list below shows the valid built-in values. Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<tool>.cmd variable is defined. merge.guitool Controls which merge tool is used by [git-mergetool[1]](git-mergetool) when the -g/--gui flag is specified. The list below shows the valid built-in values. Any other value is treated as a custom merge tool and requires that a corresponding mergetool.<guitool>.cmd variable is defined. * araxis * bc * codecompare * deltawalker * diffmerge * diffuse * ecmerge * emerge * examdiff * guiffy * gvimdiff * kdiff3 * meld * nvimdiff * opendiff * p4merge * smerge * tkdiff * tortoisemerge * vimdiff * winmerge * xxdiff merge.verbosity Controls the amount of output shown by the recursive merge strategy. 
Level 0 outputs nothing except a final error message if conflicts were detected. Level 1 outputs only conflicts, 2 outputs conflicts and file changes. Level 5 and above outputs debugging information. The default is level 2. Can be overridden by the `GIT_MERGE_VERBOSITY` environment variable. merge.<driver>.name Defines a human-readable name for a custom low-level merge driver. See [gitattributes[5]](gitattributes) for details. merge.<driver>.driver Defines the command that implements a custom low-level merge driver. See [gitattributes[5]](gitattributes) for details. merge.<driver>.recursive Names a low-level merge driver to be used when performing an internal merge between common ancestors. See [gitattributes[5]](gitattributes) for details. See also -------- [git-fmt-merge-msg[1]](git-fmt-merge-msg), [git-pull[1]](git-pull), [gitattributes[5]](gitattributes), [git-reset[1]](git-reset), [git-diff[1]](git-diff), [git-ls-files[1]](git-ls-files), [git-add[1]](git-add), [git-rm[1]](git-rm), [git-mergetool[1]](git-mergetool)
git git-count-objects git-count-objects ================= Name ---- git-count-objects - Count unpacked number of objects and their disk consumption Synopsis -------- ``` git count-objects [-v] [-H | --human-readable] ``` Description ----------- This counts the number of unpacked object files and disk space consumed by them, to help you decide when it is a good time to repack. Options ------- -v --verbose Report in more detail: count: the number of loose objects size: disk space consumed by loose objects, in KiB (unless -H is specified) in-pack: the number of in-pack objects size-pack: disk space consumed by the packs, in KiB (unless -H is specified) prune-packable: the number of loose objects that are also present in the packs. These objects could be pruned using `git prune-packed`. garbage: the number of files in object database that are neither valid loose objects nor valid packs size-garbage: disk space consumed by garbage files, in KiB (unless -H is specified) alternate: absolute path of alternate object databases; may appear multiple times, one line per path. Note that if the path contains non-printable characters, it may be surrounded by double-quotes and contain C-style backslashed escape sequences. -H --human-readable Print sizes in human readable format git git-mailinfo git-mailinfo ============ Name ---- git-mailinfo - Extracts patch and authorship from a single e-mail message Synopsis -------- ``` git mailinfo [-k|-b] [-u | --encoding=<encoding> | -n] [--[no-]scissors] [--quoted-cr=<action>] <msg> <patch> ``` Description ----------- Reads a single e-mail message from the standard input, and writes the commit log message in <msg> file, and the patches in <patch> file. The author name, e-mail and e-mail subject are written out to the standard output to be used by `git am` to create a commit. It is usually not necessary to use this command directly. See [git-am[1]](git-am) instead. 
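As a sketch of the plumbing in action, the following feeds a small hand-written message to `git mailinfo`. The message text, identities, and the output file names `msg` and `patch` are all hypothetical:

```shell
# Sketch: split a patch e-mail into log message, diff, and authorship.
# The message below and the file names are hypothetical.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
cat > mail.txt <<'EOF'
From: A U Thor <author@example.com>
Subject: [PATCH] frotz: teach frotz to xyzzy
Date: Mon, 3 Jul 2023 17:18:43 +0200

This is the commit log message.
---
diff --git a/frotz.c b/frotz.c
EOF

# Authorship and subject go to stdout; the log message lands in "msg"
# and the diff in "patch". The leading "[PATCH]" is stripped by default.
git mailinfo msg patch < mail.txt > info.txt
cat info.txt
```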
Options ------- -k Usually the program removes email cruft from the Subject: header line to extract the title line for the commit log message. This option prevents this munging, and is most useful when used to read back `git format-patch -k` output. Specifically, the following are removed until none of them remain: * Leading and trailing whitespace. * Leading `Re:`, `re:`, and `:`. * Leading bracketed strings (between `[` and `]`, usually `[PATCH]`). Finally, runs of whitespace are normalized to a single ASCII space character. -b When -k is not in effect, all leading strings bracketed with `[` and `]` pairs are stripped. This option limits the stripping to only the pairs whose bracketed string contains the word "PATCH". -u The commit log message, author name and author email are taken from the e-mail, and after minimally decoding MIME transfer encoding, re-coded in the charset specified by `i18n.commitEncoding` (defaulting to UTF-8) by transliterating them. This used to be optional but now it is the default. Note that the patch is always used as-is without charset conversion, even with this flag. --encoding=<encoding> Similar to -u. But when re-coding, the charset specified here is used instead of the one specified by `i18n.commitEncoding` or UTF-8. -n Disable all charset re-coding of the metadata. -m --message-id Copy the Message-ID header at the end of the commit message. This is useful in order to associate commits with mailing list discussions. --scissors Remove everything in body before a scissors line (e.g. "-- >8 --"). The line represents scissors and perforation marks, and is used to request the reader to cut the message at that line. If that line appears in the body of the message before the patch, everything before it (including the scissors line itself) is ignored when this option is used. 
This is useful if you want to begin your message in a discussion thread with comments and suggestions on the message you are responding to, and to conclude it with a patch submission, separating the discussion and the beginning of the proposed commit log message with a scissors line. This can be enabled by default with the configuration option mailinfo.scissors. --no-scissors Ignore scissors lines. Useful for overriding mailinfo.scissors settings. --quoted-cr=<action> Action to take when processing email messages sent with base64 or quoted-printable encoding, where the decoded lines end with a CRLF instead of a simple LF. The valid actions are: * `nowarn`: Git will do nothing when such a CRLF is found. * `warn`: Git will issue a warning for each message if such a CRLF is found. * `strip`: Git will convert those CRLF to LF. The default action can be set by the configuration option `mailinfo.quotedCR`. If no such configuration option has been set, `warn` will be used. <msg> The commit log message extracted from e-mail, usually excluding the title line, which comes from the e-mail Subject. <patch> The patch extracted from e-mail. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: mailinfo.scissors If true, makes [git-mailinfo[1]](git-mailinfo) (and therefore [git-am[1]](git-am)) act by default as if the --scissors option was provided on the command-line. When active, this feature removes everything from the message body before a scissors line (i.e. consisting mainly of ">8", "8<" and "-"). 
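The scissors behavior can be seen with a tiny hand-written reply; the message text and file names below are hypothetical:

```shell
# Sketch: --scissors drops everything above the "-- >8 --" line.
# The message and file names below are hypothetical.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
cat > reply.txt <<'EOF'
From: A U Thor <author@example.com>
Subject: Re: [PATCH] frotz: fix xyzzy

Thanks for the review; updated patch below.

-- >8 --
Rewritten as suggested by the reviewer.
---
diff --git a/frotz.c b/frotz.c
EOF
git mailinfo --scissors msg patch < reply.txt > info.txt
cat msg    # only the text below the scissors line survives
```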
git git-diff-index git-diff-index ============== Name ---- git-diff-index - Compare a tree to the working tree or index Synopsis -------- ``` git diff-index [-m] [--cached] [--merge-base] [<common-diff-options>] <tree-ish> [<path>…​] ``` Description ----------- Compares the content and mode of the blobs found in a tree object with the corresponding tracked files in the working tree, or with the corresponding paths in the index. When <path> arguments are present, compares only paths matching those patterns. Otherwise all tracked files are compared. Options ------- -p -u --patch Generate patch (see section on generating patches). -s --no-patch Suppress diff output. Useful for commands like `git show` that show the patch by default, or to cancel the effect of `--patch`. -U<n> --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies `--patch`. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char> --output-indicator-old=<char> --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and ' ' respectively. --raw Generate the diff in raw format. This is the default. --patch-with-raw Synonym for `-p --raw`. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. 
It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. `patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use `--diff-algorithm=default` option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies `--stat`. 
--numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. 
The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>…​] Synonym for --dirstat=files,param1,param2…​ --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for `-p --stat`. -z When `--raw`, `--numstat`, `--name-only` or `--name-status` has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page. --name-status Show only names and status of changed files. See the description of the `--diff-filter` option on what the status letters mean. Just like `--name-only` the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying `--submodule=short` the `short` format is used. This format just shows the names of the commits at the beginning and end of the range. When `--submodule` or `--submodule=log` is specified, the `log` format is used. This format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. When `--submodule=diff` is specified, the `diff` format is used. 
This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to `diff.submodule` or the `short` format if the config option is unset. --color[=<when>] Show colored diff. `--color` (i.e. without `=<when>`) is the same as `--color=always`. `<when>` can be one of `always`, `never`, or `auto`. --no-color Turn off colored diff. It is the same as `--color=never`. --color-moved[=<mode>] Moved lines of code are colored differently. The <mode> defaults to `no` if the option is not given and to `zebra` if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for `zebra`. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with `color.diff.newMoved`. Similarly `color.diff.oldMoved` will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the `color.diff.{old,new}Moved` color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in `blocks` mode. The blocks are painted using either the `color.diff.{old,new}Moved` color or `color.diff.{old,new}MovedAlternative`. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to `zebra`, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. `dimmed_zebra` is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as `--color-moved=no`. 
--color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for `--color-moved`. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as `--color-moved-ws=no`. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see `--word-diff-regex` below. The <mode> defaults to `plain`, and must be one of: color Highlight changed words using only colors. Implies `--color`. plain Show words as `[-removed-]` and `{+added+}`. Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a `+`/`-`/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde `~` on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. 
--word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies `--word-diff` unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append `|[^[:space:]]` to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, `--word-diff-regex=.` will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see [gitattributes[5]](gitattributes) or [git-config[1]](git-config). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to `--word-diff=color` plus (if a regex was specified) `--word-diff-regex=<regex>`. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` reset the list to `new` and `all` is a shorthand for `old,new,context`. 
When this option is not given, and the configuration variable `diff.wsErrorHighlight` is not set, only whitespace errors in `new` lines are highlighted. The whitespace errors are colored with `color.diff.whitespace`.

--full-index
Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output.

--binary
In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. Implies `--patch`.

--abbrev[=<n>]
Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long and uniquely refers to the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. A non-default number of digits can be specified with `--abbrev=<n>`.

-B[<n>][/<m>]
--break-rewrites[=[<n>][/<m>]]
Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file appears: not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new. The number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%).
`-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size are eligible for being picked up as a possible source of a rename to another file. -M[<n>] --find-renames[=<n>] Detect renames. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%. -C[<n>] --find-copies[=<n>] Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`. --find-copies-harder For performance reasons, by default, `-C` option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect. -D --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair. 
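The rename detection described under `-M` can be sketched in a throwaway repository (all names invented). With most of the file's content unchanged, the delete/add pair shows up as a single `R<score>` entry in `--raw` output:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
seq 1 20 > old.txt
git add old.txt
git commit -qm 'add old.txt'
git mv old.txt new.txt
echo 21 >> new.txt              # small edit: similarity stays above 90%
git commit -qam 'rename with a tweak'
# the raw line pairs old.txt/new.txt as a rename with a similarity score
out=$(git diff -M90% --raw HEAD~1 HEAD)
echo "$out"
```

Raising the threshold (e.g. `-M100%`) or editing more of the file would split the pair back into separate `D` and `A` entries.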
-l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)…​[\*]] Select only files that are Added (`A`), Copied (`C`), Deleted (`D`), Modified (`M`), Renamed (`R`), have their type (i.e. regular file, symlink, submodule, …​) changed (`T`), are Unmerged (`U`), are Unknown (`X`), or have had their pairing Broken (`B`). Any combination of the filter characters (including none) can be used. When `*` (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. Also, these upper-case letters can be downcased to exclude. E.g. `--diff-filter=ad` excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into `-S`, and keep going until you get the very first version of the block. Binary files are searched as well. 
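The `-S` behavior above can be sketched in a throwaway repository (file content and messages invented): only the commit that changes the number of occurrences of the string is listed, not every commit that merely touches the file:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
printf 'int frotz;\n' > a.c
git add a.c
git commit -qm 'introduce frotz'
printf 'int frotz;\nint nitfol;\n' > a.c
# the second commit edits a.c but leaves the count of "frotz" at one,
# so -S filters it out
git commit -qam 'add nitfol'
out=$(git log -S'frotz' --format=%s)
echo "$out"
```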
-G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between `-S<regex> --pickaxe-regex` and `-G<regex>`, consider a commit with the following diff in the same file: ``` + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); ``` While `git log -G"frotz\(nitfol"` will show this commit, `git log -S"frotz\(nitfol" --pickaxe-regex` will not (because the number of occurrences of that string did not change). Unless `--text` is supplied patches of binary files without a textconv filter will be ignored. See the `pickaxe` entry in [gitdiffcore[7]](gitdiffcore) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to `-S`, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the `-t` option in `git-log` to also find trees. --pickaxe-all When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. --pickaxe-regex Treat the <string> given to `-S` as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. 
If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: * Blank lines are ignored, so they can be used as separators for readability. * Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash. * Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM\_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`". --skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. --relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b --ignore-space-change Ignore changes in amount of whitespace. 
This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex> --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to `diff.interHunkContext` or 0 if the config option is unset. -W --function-context Show whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies `--exit-code`. --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends. --no-ext-diff Disallow external diff drivers. --textconv --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. 
<when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also [gitdiffcore[7]](gitdiffcore). <tree-ish> The id of a tree object to diff against. --cached Do not consider the on-disk file at all. --merge-base Instead of comparing <tree-ish> directly, use the merge base between <tree-ish> and HEAD instead. <tree-ish> must be a commit. -m By default, files recorded in the index but not checked out are reported as deleted. This flag makes `git diff-index` say that all non-checked-out files are up to date. 
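As a scripting aside, the `--exit-code`/`--quiet` options described earlier make the diff commands usable as a cheap "anything changed?" test. A minimal sketch in a throwaway repository (names invented):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > f.txt
git add f.txt
git commit -qm 'initial'
echo two > f.txt                 # dirty the working tree
# --quiet implies --exit-code: status 1 if there are differences, 0 if none
if git diff --quiet; then clean=yes; else clean=no; fi
echo "$clean"
```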
Raw output format ----------------- The raw output format from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" are very similar. These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>…​] compares the trees named by the two arguments. git-diff-files [<pattern>…​] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file. An output line is formatted this way: ``` in-place edit :100644 100644 bcd1234 0123456 M file0 copy-edit :100644 100644 abcd123 1234567 C68 file1 file2 rename-edit :100644 100644 abcd123 1234567 R86 file1 file3 create :000000 100644 0000000 1234567 A file4 delete :100644 000000 1234567 0000000 D file5 unmerged :000000 000000 0000000 0000000 U file6 ``` That is, from the left to the right: 1. a colon. 2. mode for "src"; 000000 if creation or unmerged. 3. a space. 4. mode for "dst"; 000000 if deletion or unmerged. 5. a space. 6. sha1 for "src"; 0{40} if creation or unmerged. 7. a space. 8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index". 9. a space. 10. status, followed by optional "score" number. 11. a tab or a NUL when `-z` option is used. 12. path for "src" 13. a tab or a NUL when `-z` option is used; only exists for C or R. 14. path for "dst"; only exists for C or R. 15. an LF or a NUL when `-z` option is used, to terminate the record. 
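To make the field layout above concrete, here is a throwaway-repository sketch (file name invented) that produces one raw line for an in-place edit; the fifth whitespace-separated field is the status letter:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > f.txt
git add f.txt
git commit -qm 'initial'
echo two > f.txt
git add f.txt                    # stage an in-place edit
# one raw line of the form: :100644 100644 <sha> <sha> M<TAB>f.txt
git diff-index --cached HEAD
status=$(git diff-index --cached HEAD | awk '{print $5}')
echo "$status"
```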
Possible status letters are: * A: addition of a file * C: copy of a file into a new one * D: deletion of a file * M: modification of the contents or mode of a file * R: renaming of a file * T: change in the type of the file (regular file, symbolic link or submodule) * U: file is unmerged (you must complete the merge before it can be committed) * X: "unknown" change type (most probably a bug, please report it) Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites. The sha1 for "dst" is shown as all 0’s if a file on the filesystem is out of sync with the index. Example: ``` :100644 100644 5be4a4a 0000000 M file.c ``` Without the `-z` option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). Using `-z` the filename is output verbatim and the line is terminated by a NUL byte. Diff format for merges ---------------------- "git-diff-tree", "git-diff-files" and "git-diff --raw" can take `-c` or `--cc` option to generate diff output also for merge commits. The output differs from the format described above in the following way: 1. there is a colon for each parent 2. there are more "src" modes and "src" sha1 3. status is concatenated status characters for each parent 4. no optional "score" number 5. tab-separated pathname(s) of the file For `-c` and `--cc`, only the destination or final path is shown even if the file was renamed on any side of history. With `--combined-all-paths`, the name of the path in each parent is shown followed by the name of the path in the merge commit. 
Examples for `-c` and `--cc` without `--combined-all-paths`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR phooey.c ``` Examples when `--combined-all-paths` added to either `-c` or `--cc`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c desc.c desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM foo.sh bar.sh bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR fooey.c fuey.c phooey.c ``` Note that `combined diff` lists only files which were modified from all parents. Generating patch text with -p ----------------------------- Running [git-diff[1]](git-diff), [git-log[1]](git-log), [git-show[1]](git-show), [git-diff-index[1]](git-diff-index), [git-diff-tree[1]](git-diff-tree), or [git-diff-files[1]](git-diff-files) with the `-p` option produces patch text. You can customize the creation of patch text via the `GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables (see [git[1]](git)), and the `diff` attribute (see [gitattributes[5]](gitattributes)). What the -p option produces is slightly different from the traditional diff format: 1. It is preceded with a "git diff" header that looks like this: ``` diff --git a/file1 b/file2 ``` The `a/` and `b/` filenames are the same unless rename/copy is involved. Especially, even for a creation or a deletion, `/dev/null` is `not` used in place of the `a/` or `b/` filenames. When rename/copy is involved, `file1` and `file2` show the name of the source file of the rename/copy and the name of the file that rename/copy produces, respectively. 2. 
It is followed by one or more extended header lines:

```
old mode <mode>
new mode <mode>
deleted file mode <mode>
new file mode <mode>
copy from <path>
copy to <path>
rename from <path>
rename to <path>
similarity index <number>
dissimilarity index <number>
index <hash>..<hash> <mode>
```

File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the `a/` and `b/` prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. It is a rounded down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode.

3. Pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)).

4. All the `file1` files in the output refer to files before the commit, and all the `file2` files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b:

```
diff --git a/a b/b
rename from a
rename to b
diff --git a/b b/a
rename from b
rename to a
```

5. Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in [gitattributes[5]](gitattributes) for details of how to tailor this to specific languages.

Combined diff format
--------------------

Any diff-generating command can take the `-c` or `--cc` option to produce a `combined diff` when showing a merge. This is the default format when showing merges with [git-diff[1]](git-diff) or [git-show[1]](git-show).
Note also that you can give suitable `--diff-merges` option to any of these commands to force generation of diffs in specific format. A "combined diff" format looks like this: ``` diff --combined describe.c index fabadb8,cc95eb0..4866510 --- a/describe.c +++ b/describe.c @@@ -98,20 -98,12 +98,20 @@@ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1; } - static void describe(char *arg) -static void describe(struct commit *cmit, int last_one) ++static void describe(char *arg, int last_one) { + unsigned char sha1[20]; + struct commit *cmit; struct commit_list *list; static int initialized = 0; struct commit_name *n; + if (get_sha1(arg, sha1) < 0) + usage(describe_usage); + cmit = lookup_commit_reference(sha1); + if (!cmit) + usage(describe_usage); + if (!initialized) { initialized = 1; for_each_ref(get_name); ``` 1. It is preceded with a "git diff" header, that looks like this (when the `-c` option is used): ``` diff --combined file ``` or like this (when the `--cc` option is used): ``` diff --cc file ``` 2. It is followed by one or more extended header lines (this example shows a merge with two parents): ``` index <hash>,<hash>..<hash> mode <mode>,<mode>..<mode> new file mode <mode> deleted file mode <mode>,<mode> ``` The `mode <mode>,<mode>..<mode>` line appears only if at least one of the <mode> is different from the rest. Extended headers with information about detected contents movement (renames and copying detection) are designed to work with diff of two <tree-ish> and are not used by combined diff format. 3. It is followed by two-line from-file/to-file header ``` --- a/file +++ b/file ``` Similar to two-line header for traditional `unified` diff format, `/dev/null` is used to signal created or deleted files. 
However, if the --combined-all-paths option is provided, instead of a two-line from-file/to-file header you get an N+1-line from-file/to-file header, where N is the number of parents in the merge commit:

```
--- a/file
--- a/file
--- a/file
+++ b/file
```

This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents.

4. Chunk header format is modified to prevent people from accidentally feeding it to `patch -p1`. Combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended `index` header:

```
@@@ <from-file-range> <from-file-range> <to-file-range> @@@
```

There are (number of parents + 1) `@` characters in the chunk header for combined diff format.

Unlike the traditional `unified` diff format, which shows two files A and B with a single column that has `-` (minus — appears in A but removed in B), `+` (plus — missing in A but added to B), or `" "` (space — unchanged) prefix, this format compares two or more files file1, file2,… with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X's line is different from it.

A `-` character in the column N means that the line appears in fileN but it does not appear in the result. A `+` character in the column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent).

In the above example output, the function signature was changed from both files (hence two `-` removals from both file1 and file2, plus `++` to mean one line that was added does not appear in either file1 or file2). Also eight other lines are the same from file1 but do not appear in file2 (hence prefixed with `+`).

When shown by `git diff-tree -c`, it compares the parents of a merge commit with the merge result (i.e.
file1..fileN are the parents). When shown by `git diff-files -c`, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version"). Other diff formats ------------------ The `--summary` option describes newly added, deleted, renamed and copied files. The `--stat` option adds diffstat(1) graph to the output. These options can be combined with other options, such as `-p`, and are meant for human consumption. When showing a change that involves a rename or a copy, `--stat` output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves `arch/i386/Makefile` to `arch/x86/Makefile` while modifying 4 lines will be shown like this: ``` arch/{i386 => x86}/Makefile | 4 +-- ``` The `--numstat` option gives the diffstat(1) information but is designed for easier machine consumption. An entry in `--numstat` output looks like this: ``` 1 2 README 3 1 arch/{i386 => x86}/Makefile ``` That is, from left to right: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. pathname (possibly with rename/copy information); 6. a newline. When `-z` output option is in effect, the output is formatted this way: ``` 1 2 README NUL 3 1 NUL arch/i386/Makefile NUL arch/x86/Makefile NUL ``` That is: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. a NUL (only exists if renamed/copied); 6. pathname in preimage; 7. a NUL (only exists if renamed/copied); 8. pathname in postimage (only exists if renamed/copied); 9. a NUL. The extra `NUL` before the preimage path in renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to `NUL` would yield the pathname, but if that is `NUL`, the record will show two paths. 
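A small sketch of consuming `--numstat` in a throwaway repository (file content invented): the output is tab-separated, so awk's default field splitting recovers the added and deleted counts per path:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
printf 'a\nb\nc\n' > f.txt
git add f.txt
git commit -qm 'initial'
printf 'a\nx\n' > f.txt          # delete two lines, add one
out=$(git diff --numstat)
echo "$out"
added=$(printf '%s\n' "$out" | awk '{print $1}')
deleted=$(printf '%s\n' "$out" | awk '{print $2}')
```

For rename/copy records, or pathnames with unusual characters, prefer the `-z` form described above rather than parsing the quoted pathnames.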
Operating modes --------------- You can choose whether you want to trust the index file entirely (using the `--cached` flag) or ask the diff logic to show any files that don’t match the stat state as being "tentatively changed". Both of these operations are very useful indeed. Cached mode ----------- If `--cached` is specified, it allows you to ask: ``` show me the differences between HEAD and the current index contents (the ones I'd write using 'git write-tree') ``` For example, let’s say that you have worked on your working directory, updated some files in the index and are ready to commit. You want to see exactly **what** you are going to commit, without having to write a new tree object and compare it that way, and to do that, you just do ``` git diff-index --cached HEAD ``` Example: let’s say I had renamed `commit.c` to `git-commit.c`, and I had done an `update-index` to make that effective in the index file. `git diff-files` wouldn’t show anything at all, since the index file matches my working directory. But doing a `git diff-index` does: ``` torvalds@ppc970:~/git> git diff-index --cached HEAD :100644 000000 4161aecc6700a2eb579e842af0b7f22b98443f74 0000000000000000000000000000000000000000 D commit.c :000000 100644 0000000000000000000000000000000000000000 4161aecc6700a2eb579e842af0b7f22b98443f74 A git-commit.c ``` You can see easily that the above is a rename. In fact, `git diff-index --cached` **should** always be entirely equivalent to actually doing a `git write-tree` and comparing that. Except this one is much nicer for the case where you just want to check where you are. So doing a `git diff-index --cached` is basically very useful when you are asking yourself "what have I already marked for being committed, and what’s the difference to a previous tree". 
Non-cached mode
---------------

The "non-cached" mode takes a different approach, and is potentially the more useful of the two in that what it does can't be emulated with a `git write-tree` + `git diff-tree`. Thus that's the default mode. The non-cached version asks the question:

```
show me the differences between HEAD and the currently checked out
tree - index contents _and_ files that aren't up to date
```

which is obviously a very useful question too, since that tells you what you **could** commit. Again, the output matches the `git diff-tree -r` output to a tee, but with a twist. The twist is that if some file doesn't match the index, we don't have a backing store thing for it, and we use the magic "all-zero" sha1 to show that.

So let's say that you have edited `kernel/sched.c`, but have not actually done a `git update-index` on it yet - there is no "object" associated with the new state, and you get:

```
torvalds@ppc970:~/v2.6/linux> git diff-index --abbrev HEAD
:100644 100644 7476bb5ba 000000000 M kernel/sched.c
```

i.e., it shows that the tree has changed, and that `kernel/sched.c` is not up to date and may contain new stuff. The all-zero sha1 means that to get the real diff, you need to look at the object in the working directory directly rather than do an object-to-object diff.

Note: As with other commands of this type, *git diff-index* does not actually look at the contents of the file at all. So maybe `kernel/sched.c` hasn't actually changed, and it's just that you touched it. In either case, it's a note that you need to *git update-index* it to make the index be in sync.

Note: You can have a mixture of files show up as "has been updated" and "is still dirty in the working directory" together. You can always tell which file is in which state, since the "has been updated" ones show a valid sha1, and the "not in sync with the index" ones will always have the special all-zero sha1.
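The all-zero sha1 behavior can be reproduced in a throwaway repository (names invented): edit a tracked file without running `git update-index`, and `git diff-index` reports the zero object name for the work-tree side:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > sched.c
git add sched.c
git commit -qm 'initial'
echo two > sched.c               # modified, but not update-index'd
# dst sha1 is all zeros: no object exists for the new work-tree state
line=$(git diff-index --abbrev HEAD)
echo "$line"
```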
git gitrevisions

gitrevisions
============

Name
----

gitrevisions - Specifying revisions and ranges for Git

Synopsis
--------

gitrevisions

Description
-----------

Many Git commands take revision parameters as arguments. Depending on the command, they denote a specific commit or, for commands which walk the revision graph (such as [git-log[1]](git-log)), all commits which are reachable from that commit. For commands that walk the revision graph one can also specify a range of revisions explicitly. In addition, some Git commands (such as [git-show[1]](git-show) and [git-push[1]](git-push)) can also take revision parameters which denote other objects than commits, e.g. blobs ("files") or trees ("directories of files").

Specifying revisions
--------------------

A revision parameter `<rev>` typically, but not necessarily, names a commit object. It uses what is called an `extended SHA-1` syntax. Here are various ways to spell object names. The ones listed near the end of this list name trees and blobs contained in a commit.

Note: This document shows the "raw" syntax as seen by git. The shell and other UIs might require additional quoting to protect special characters and to avoid word splitting.

*<sha1>*, e.g. *dae86e1950b1277e545cee180551750029cfe735*, *dae86e*
The full SHA-1 object name (40-byte hexadecimal string), or a leading substring that is unique within the repository. E.g. dae86e1950b1277e545cee180551750029cfe735 and dae86e both name the same commit object if there is no other object in your repository whose object name starts with dae86e.

*<describeOutput>*, e.g. *v1.7.4.2-679-g3bee7fb*
Output from `git describe`; i.e. a closest tag, optionally followed by a dash and a number of commits, followed by a dash, a `g`, and an abbreviated object name.

*<refname>*, e.g. *master*, *heads/master*, *refs/heads/master*
A symbolic ref name. E.g. `master` typically means the commit object referenced by `refs/heads/master`.
If you happen to have both `heads/master` and `tags/master`, you can explicitly say `heads/master` to tell Git which one you mean. When ambiguous, a `<refname>` is disambiguated by taking the first match in the following rules: 1. If `$GIT_DIR/<refname>` exists, that is what you mean (this is usually useful only for `HEAD`, `FETCH_HEAD`, `ORIG_HEAD`, `MERGE_HEAD` and `CHERRY_PICK_HEAD`); 2. otherwise, `refs/<refname>` if it exists; 3. otherwise, `refs/tags/<refname>` if it exists; 4. otherwise, `refs/heads/<refname>` if it exists; 5. otherwise, `refs/remotes/<refname>` if it exists; 6. otherwise, `refs/remotes/<refname>/HEAD` if it exists. `HEAD` names the commit on which you based the changes in the working tree. `FETCH_HEAD` records the branch which you fetched from a remote repository with your last `git fetch` invocation. `ORIG_HEAD` is created by commands that move your `HEAD` in a drastic way, to record the position of the `HEAD` before their operation, so that you can easily change the tip of the branch back to the state before you ran them. `MERGE_HEAD` records the commit(s) which you are merging into your branch when you run `git merge`. `CHERRY_PICK_HEAD` records the commit which you are cherry-picking when you run `git cherry-pick`. Note that any of the `refs/*` cases above may come either from the `$GIT_DIR/refs` directory or from the `$GIT_DIR/packed-refs` file. While the ref name encoding is unspecified, UTF-8 is preferred as some output processing may assume ref names in UTF-8. *@* `@` alone is a shortcut for `HEAD`. *[<refname>]@{<date>}*, e.g. *master@{yesterday}*, *HEAD@{5 minutes ago}* A ref followed by the suffix `@` with a date specification enclosed in a brace pair (e.g. `{yesterday}`, `{1 month 2 weeks 3 days 1 hour 1 second ago}` or `{1979-02-26 18:30:00}`) specifies the value of the ref at a prior point in time. This suffix may only be used immediately following a ref name and the ref must have an existing log (`$GIT_DIR/logs/<ref>`). 
Note that this looks up the state of your **local** ref at a given time; e.g., what was in your local `master` branch last week. If you want to look at commits made during certain times, see `--since` and `--until`. *<refname>@{<n>}*, e.g. *master@{1}* A ref followed by the suffix `@` with an ordinal specification enclosed in a brace pair (e.g. `{1}`, `{15}`) specifies the n-th prior value of that ref. For example `master@{1}` is the immediate prior value of `master` while `master@{5}` is the 5th prior value of `master`. This suffix may only be used immediately following a ref name and the ref must have an existing log (`$GIT_DIR/logs/<refname>`). *@{<n>}*, e.g. *@{1}* You can use the `@` construct with an empty ref part to get at a reflog entry of the current branch. For example, if you are on branch `blabla` then `@{1}` means the same as `blabla@{1}`. *@{-<n>}*, e.g. *@{-1}* The construct `@{-<n>}` means the <n>th branch/commit checked out before the current one. *[<branchname>]@{upstream}*, e.g. *master@{upstream}*, *@{u}* A branch B may be set up to build on top of a branch X (configured with `branch.<name>.merge`) at a remote R (configured with `branch.<name>.remote`). B@{u} refers to the remote-tracking branch for the branch X taken from remote R, typically found at `refs/remotes/R/X`. *[<branchname>]@{push}*, e.g. *master@{push}*, *@{push}* The suffix `@{push}` reports the branch "where we would push to" if `git push` were run while `branchname` was checked out (or the current `HEAD` if no branchname is specified). Like for `@{upstream}`, we report the remote-tracking branch that corresponds to that branch at the remote. 
Here’s an example to make it more clear:

```
$ git config push.default current
$ git config remote.pushdefault myfork
$ git switch -c mybranch origin/master

$ git rev-parse --symbolic-full-name @{upstream}
refs/remotes/origin/master

$ git rev-parse --symbolic-full-name @{push}
refs/remotes/myfork/mybranch
```

Note in the example that we set up a triangular workflow, where we pull from one location and push to another. In a non-triangular workflow, `@{push}` is the same as `@{upstream}`, and there is no need for it. This suffix is also accepted when spelled in uppercase, and means the same thing no matter the case. *<rev>^[<n>]*, e.g. *HEAD^, v1.5.1^0* A suffix `^` to a revision parameter means the first parent of that commit object. `^<n>` means the <n>th parent (i.e. `<rev>^` is equivalent to `<rev>^1`). As a special rule, `<rev>^0` means the commit itself and is used when `<rev>` is the object name of a tag object that refers to a commit object. *<rev>~[<n>]*, e.g. *HEAD~, master~3* A suffix `~` to a revision parameter means the first parent of that commit object. A suffix `~<n>` to a revision parameter means the commit object that is the <n>th generation ancestor of the named commit object, following only the first parents. I.e. `<rev>~3` is equivalent to `<rev>^^^` which is equivalent to `<rev>^1^1^1`. See below for an illustration of the usage of this form. *<rev>^{<type>}*, e.g. *v0.99.8^{commit}* A suffix `^` followed by an object type name enclosed in brace pair means dereference the object at `<rev>` recursively until an object of type `<type>` is found or the object cannot be dereferenced anymore (in which case, barf). For example, if `<rev>` is a commit-ish, `<rev>^{commit}` describes the corresponding commit object. Similarly, if `<rev>` is a tree-ish, `<rev>^{tree}` describes the corresponding tree object. `<rev>^0` is a short-hand for `<rev>^{commit}`.
`<rev>^{object}` can be used to make sure `<rev>` names an object that exists, without requiring `<rev>` to be a tag, and without dereferencing `<rev>`; because a tag is already an object, it does not have to be dereferenced even once to get to an object. `<rev>^{tag}` can be used to ensure that `<rev>` identifies an existing tag object. *<rev>^{}*, e.g. *v0.99.8^{}* A suffix `^` followed by an empty brace pair means the object could be a tag, and dereference the tag recursively until a non-tag object is found. *<rev>^{/<text>}*, e.g. *HEAD^{/fix nasty bug}* A suffix `^` to a revision parameter, followed by a brace pair that contains a text led by a slash, is the same as the `:/fix nasty bug` syntax below except that it returns the youngest matching commit which is reachable from the `<rev>` before `^`. *:/<text>*, e.g. *:/fix nasty bug* A colon, followed by a slash, followed by a text, names a commit whose commit message matches the specified regular expression. This name returns the youngest matching commit which is reachable from any ref, including HEAD. The regular expression can match any part of the commit message. To match messages starting with a string, one can use e.g. `:/^foo`. The special sequence `:/!` is reserved for modifiers to what is matched. `:/!-foo` performs a negative match, while `:/!!foo` matches a literal `!` character, followed by `foo`. Any other sequence beginning with `:/!` is reserved for now. Depending on the given text, the shell’s word splitting rules might require additional quoting. *<rev>:<path>*, e.g. *HEAD:README*, *master:./README* A suffix `:` followed by a path names the blob or tree at the given path in the tree-ish object named by the part before the colon. A path starting with `./` or `../` is relative to the current working directory. The given path will be converted to be relative to the working tree’s root directory. 
This is most useful to address a blob or tree from a commit or tree that has the same tree structure as the working tree. *:[<n>:]<path>*, e.g. *:0:README*, *:README* A colon, optionally followed by a stage number (0 to 3) and a colon, followed by a path, names a blob object in the index at the given path. A missing stage number (and the colon that follows it) names a stage 0 entry. During a merge, stage 1 is the common ancestor, stage 2 is the target branch’s version (typically the current branch), and stage 3 is the version from the branch which is being merged. Here is an illustration, by Jon Loeliger. Both commit nodes B and C are parents of commit node A. Parent commits are ordered left-to-right.

```
G   H   I   J
 \ /     \ /
  D   E   F
   \  |  / \
    \ | /   |
     \|/    |
      B     C
       \   /
        \ /
         A
```

```
A =      = A^0
B = A^   = A^1     = A~1
C =      = A^2
D = A^^  = A^1^1   = A~2
E = B^2  = A^^2
F = B^3  = A^^3
G = A^^^ = A^1^1^1 = A~3
H = D^2  = B^^2    = A^^^2 = A~2^2
I = F^   = B^3^    = A^^3^
J = F^2  = B^3^2   = A^^3^2
```

Specifying ranges ----------------- History traversing commands such as `git log` operate on a set of commits, not just a single commit. For these commands, specifying a single revision, using the notation described in the previous section, means the set of commits `reachable` from the given commit. Specifying several revisions means the set of commits reachable from any of the given commits. A commit’s reachable set is the commit itself and the commits in its ancestry chain. There are several notations to specify a set of connected commits (called a "revision range"), illustrated below. ### Commit Exclusions *^<rev>* (caret) Notation To exclude commits reachable from a commit, a prefix `^` notation is used. E.g. `^r1 r2` means commits reachable from `r2` but exclude the ones reachable from `r1` (i.e. `r1` and its ancestors). ### Dotted Range Notations The *..* (two-dot) Range Notation The `^r1 r2` set operation appears so often that there is a shorthand for it.
When you have two commits `r1` and `r2` (named according to the syntax explained in SPECIFYING REVISIONS above), you can ask for commits that are reachable from r2 excluding those that are reachable from r1 by `^r1 r2` and it can be written as `r1..r2`. The *...* (three-dot) Symmetric Difference Notation A similar notation `r1...r2` is called symmetric difference of `r1` and `r2` and is defined as `r1 r2 --not $(git merge-base --all r1 r2)`. It is the set of commits that are reachable from either one of `r1` (left side) or `r2` (right side) but not from both. In these two shorthand notations, you can omit one end and let it default to HEAD. For example, `origin..` is a shorthand for `origin..HEAD` and asks "What did I do since I forked from the origin branch?" Similarly, `..origin` is a shorthand for `HEAD..origin` and asks "What did the origin do since I forked from them?" Note that `..` would mean `HEAD..HEAD` which is an empty range that is both reachable and unreachable from HEAD. Commands that are specifically designed to take two distinct ranges (e.g. "git range-diff R1 R2" to compare two ranges) do exist, but they are exceptions. Unless otherwise noted, all "git" commands that operate on a set of commits work on a single revision range. In other words, writing two "two-dot range notation" next to each other, e.g.

```
$ git log A..B C..D
```

does **not** specify two revision ranges for most commands. Instead it will name a single connected set of commits, i.e. those that are reachable from either B or D but are reachable from neither A nor C. In a linear history like this:

```
---A---B---o---o---C---D
```

because A and B are reachable from C, the revision range specified by these two dotted ranges is a single commit D. ### Other <rev>^ Parent Shorthand Notations Three other shorthands exist, particularly useful for merge commits, for naming a set that is formed by a commit and its parent commits. The `r1^@` notation means all parents of `r1`.
The `r1^!` notation includes commit `r1` but excludes all of its parents. By itself, this notation denotes the single commit `r1`. The `<rev>^-[<n>]` notation includes `<rev>` but excludes the <n>th parent (i.e. a shorthand for `<rev>^<n>..<rev>`), with `<n>` = 1 if not given. This is typically useful for merge commits where you can just pass `<commit>^-` to get all the commits in the branch that was merged in merge commit `<commit>` (including `<commit>` itself). While `<rev>^<n>` was about specifying a single commit parent, these three notations also consider its parents. For example you can say `HEAD^2^@`, however you cannot say `HEAD^@^2`. Revision range summary ---------------------- *<rev>* Include commits that are reachable from <rev> (i.e. <rev> and its ancestors). *^<rev>* Exclude commits that are reachable from <rev> (i.e. <rev> and its ancestors). *<rev1>..<rev2>* Include commits that are reachable from <rev2> but exclude those that are reachable from <rev1>. When either <rev1> or <rev2> is omitted, it defaults to `HEAD`. *<rev1>...<rev2>* Include commits that are reachable from either <rev1> or <rev2> but exclude those that are reachable from both. When either <rev1> or <rev2> is omitted, it defaults to `HEAD`. *<rev>^@*, e.g. *HEAD^@* A suffix `^` followed by an at sign is the same as listing all parents of `<rev>` (meaning, include anything reachable from its parents, but not the commit itself). *<rev>^!*, e.g. *HEAD^!* A suffix `^` followed by an exclamation mark is the same as giving commit `<rev>` and all its parents prefixed with `^` to exclude them (and their ancestors). *<rev>^-<n>*, e.g. *HEAD^-, HEAD^-2* Equivalent to `<rev>^<n>..<rev>`, with `<n>` = 1 if not given. 
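The range notations summarized above can be checked mechanically with `git rev-list --count`. A minimal sketch in a throwaway repository, assuming `git` and a POSIX shell; the branch names `r1` and `main` are illustrative:

```shell
# Build a linear history of four empty commits, with r1 parked
# two commits behind main, then count each range.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git -c init.defaultBranch=main init -q
ci() { git -c user.name=demo -c user.email=demo@example.com \
          commit -q --allow-empty -m "$1"; }
ci one; ci two
git branch r1                   # r1 stays two commits behind
ci three; ci four
git rev-list --count r1..main   # reachable from main but not r1: 2
git rev-list --count main^!     # main itself, parents excluded: 1
git rev-list --count r1...main  # symmetric difference (linear history): 2
```

In this linear history `r1` is an ancestor of `main`, so the merge base is `r1` itself and the two-dot and three-dot counts coincide; with diverged branches the three-dot count would include commits from both sides.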
Here are a handful of examples using the Loeliger illustration above, with each step in the notation’s expansion and selection carefully spelt out:

```
   Args   Expanded arguments    Selected commits
   D                            G H D
   D F                          G H I J D F
   ^G D                         H D
   ^D B                         E I J F B
   ^D B C                       E I J F B C
   C                            I J F C
   B..C   = ^B C                C
   B...C  = B ^F C              G H D E B C
   B^-    = B^..B
          = ^B^1 B              E I J F B
   C^@    = C^1
          = F                   I J F
   B^@    = B^1 B^2 B^3
          = D E F               D G H E F I J
   C^!    = C ^C^@
          = C ^C^1
          = C ^F                C
   B^!    = B ^B^@
          = B ^B^1 ^B^2 ^B^3
          = B ^D ^E ^F          B
   F^! D  = F ^I ^J D           G H D F
```

See also -------- [git-rev-parse[1]](git-rev-parse) git git-checkout git-checkout ============ Name ---- git-checkout - Switch branches or restore working tree files Synopsis --------

```
git checkout [-q] [-f] [-m] [<branch>]
git checkout [-q] [-f] [-m] --detach [<branch>]
git checkout [-q] [-f] [-m] [--detach] <commit>
git checkout [-q] [-f] [-m] [[-b|-B|--orphan] <new-branch>] [<start-point>]
git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>…
git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] --pathspec-from-file=<file> [--pathspec-file-nul]
git checkout (-p|--patch) [<tree-ish>] [--] [<pathspec>…]
```

Description ----------- Updates files in the working tree to match the version in the index or the specified tree. If no pathspec was given, `git checkout` will also update `HEAD` to set the specified branch as the current branch. *git checkout* [<branch>] To prepare for working on `<branch>`, switch to it by updating the index and the files in the working tree, and by pointing `HEAD` at the branch. Local modifications to the files in the working tree are kept, so that they can be committed to the `<branch>`.
If `<branch>` is not found but there does exist a tracking branch in exactly one remote (call it `<remote>`) with a matching name and `--no-guess` is not specified, treat as equivalent to

```
$ git checkout -b <branch> --track <remote>/<branch>
```

You could omit `<branch>`, in which case the command degenerates to "check out the current branch", which is a glorified no-op with rather expensive side-effects to show only the tracking information, if exists, for the current branch. *git checkout* -b|-B <new-branch> [<start-point>] Specifying `-b` causes a new branch to be created as if [git-branch[1]](git-branch) were called and then checked out. In this case you can use the `--track` or `--no-track` options, which will be passed to `git branch`. As a convenience, `--track` without `-b` implies branch creation; see the description of `--track` below. If `-B` is given, `<new-branch>` is created if it doesn’t exist; otherwise, it is reset. This is the transactional equivalent of

```
$ git branch -f <branch> [<start-point>]
$ git checkout <branch>
```

that is to say, the branch is not reset/created unless "git checkout" is successful. *git checkout* --detach [<branch>] *git checkout* [--detach] <commit> Prepare to work on top of `<commit>`, by detaching `HEAD` at it (see "DETACHED HEAD" section), and updating the index and the files in the working tree. Local modifications to the files in the working tree are kept, so that the resulting working tree will be the state recorded in the commit plus the local modifications. When the `<commit>` argument is a branch name, the `--detach` option can be used to detach `HEAD` at the tip of the branch (`git checkout <branch>` would check out that branch without detaching `HEAD`). Omitting `<branch>` detaches `HEAD` at the tip of the current branch.
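The detaching behavior can be observed directly. A minimal sketch in a throwaway repository, assuming `git` and a POSIX shell; repo and identity are illustrative:

```shell
# Detach HEAD at the tip of the current branch and confirm that
# HEAD no longer resolves symbolically to any branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git -c init.defaultBranch=main init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'first'
git checkout -q --detach             # same commit, but off any branch
git branch --show-current            # prints nothing: not on a branch
git symbolic-ref -q HEAD || echo 'HEAD is detached'
```

`git symbolic-ref -q HEAD` exits non-zero precisely because `HEAD` now names a commit rather than a branch, which is the definition of the detached state described in the "DETACHED HEAD" section below.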
*git checkout* [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>…​ *git checkout* [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] --pathspec-from-file=<file> [--pathspec-file-nul] Overwrite the contents of the files that match the pathspec. When the `<tree-ish>` (most often a commit) is not given, overwrite working tree with the contents in the index. When the `<tree-ish>` is given, overwrite both the index and the working tree with the contents at the `<tree-ish>`. The index may contain unmerged entries because of a previous failed merge. By default, if you try to check out such an entry from the index, the checkout operation will fail and nothing will be checked out. Using `-f` will ignore these unmerged entries. The contents from a specific side of the merge can be checked out of the index by using `--ours` or `--theirs`. With `-m`, changes made to the working tree file can be discarded to re-create the original conflicted merge result. *git checkout* (-p|--patch) [<tree-ish>] [--] [<pathspec>…​] This is similar to the previous mode, but lets you use the interactive interface to show the "diff" output and choose which hunks to use in the result. See below for the description of `--patch` option. Options ------- -q --quiet Quiet, suppress feedback messages. --progress --no-progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless `--quiet` is specified. This flag enables progress reporting even if not attached to a terminal, regardless of `--quiet`. -f --force When switching branches, proceed even if the index or the working tree differs from `HEAD`, and even if there are untracked files in the way. This is used to throw away local changes and any untracked files or directories that are in the way. When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored. 
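The "throw away local changes" use of `-f` can be sketched in a throwaway repository, assuming `git` and a POSIX shell; branch and file names are illustrative:

```shell
# An uncommitted edit to `file` differs between branches, so a plain
# checkout refuses; `-f` discards the edit and switches anyway.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git -c init.defaultBranch=main init -q
echo v1 > file && git add file
git -c user.name=demo -c user.email=demo@example.com commit -q -m v1
git checkout -q -b topic
echo v2 > file
git -c user.name=demo -c user.email=demo@example.com commit -q -am v2
echo scratch > file                        # uncommitted local edit
git checkout -q main 2>/dev/null || echo 'refused: local changes'
git checkout -q -f main                    # forced: the edit is discarded
cat file                                   # back to the committed content
```

After the forced switch, `file` holds the `main` version again and the `scratch` edit is gone for good, which is why `-f` is documented as a way to throw away local changes.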
--ours --theirs When checking out paths from the index, check out stage #2 (`ours`) or #3 (`theirs`) for unmerged paths. Note that during `git rebase` and `git pull --rebase`, `ours` and `theirs` may appear swapped; `--ours` gives the version from the branch the changes are rebased onto, while `--theirs` gives the version from the branch that holds your work that is being rebased. This is because `rebase` is used in a workflow that treats the history at the remote as the shared canonical one, and treats the work done on the branch you are rebasing as the third-party work to be integrated, and you are temporarily assuming the role of the keeper of the canonical history during the rebase. As the keeper of the canonical history, you need to view the history from the remote as `ours` (i.e. "our shared canonical history"), while what you did on your side branch as `theirs` (i.e. "one contributor’s work on top of it"). -b <new-branch> Create a new branch named `<new-branch>` and start it at `<start-point>`; see [git-branch[1]](git-branch) for details. -B <new-branch> Creates the branch `<new-branch>` and start it at `<start-point>`; if it already exists, then reset it to `<start-point>`. This is equivalent to running "git branch" with "-f"; see [git-branch[1]](git-branch) for details. -t --track[=(direct|inherit)] When creating a new branch, set up "upstream" configuration. See "--track" in [git-branch[1]](git-branch) for details. If no `-b` option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "\*". This would tell us to use `hack` as the local branch when branching off of `origin/hack` (or `remotes/origin/hack`, or even `refs/remotes/origin/hack`). If the given name has no slash, or the above guessing results in an empty name, the guessing is aborted. 
You can explicitly give a name with `-b` in such a case. --no-track Do not set up "upstream" configuration, even if the `branch.autoSetupMerge` configuration variable is true. --guess --no-guess If `<branch>` is not found but there does exist a tracking branch in exactly one remote (call it `<remote>`) with a matching name, treat as equivalent to ``` $ git checkout -b <branch> --track <remote>/<branch> ``` If the branch exists in multiple remotes and one of them is named by the `checkout.defaultRemote` configuration variable, we’ll use that one for the purposes of disambiguation, even if the `<branch>` isn’t unique across all remotes. Set it to e.g. `checkout.defaultRemote=origin` to always checkout remote branches from there if `<branch>` is ambiguous but exists on the `origin` remote. See also `checkout.defaultRemote` in [git-config[1]](git-config). `--guess` is the default behavior. Use `--no-guess` to disable it. The default behavior can be set via the `checkout.guess` configuration variable. -l Create the new branch’s reflog; see [git-branch[1]](git-branch) for details. -d --detach Rather than checking out a branch to work on it, check out a commit for inspection and discardable experiments. This is the default behavior of `git checkout <commit>` when `<commit>` is not a branch name. See the "DETACHED HEAD" section below for details. --orphan <new-branch> Create a new `orphan` branch, named `<new-branch>`, started from `<start-point>` and switch to it. The first commit made on this new branch will have no parents and it will be the root of a new history totally disconnected from all the other branches and commits. The index and the working tree are adjusted as if you had previously run `git checkout <start-point>`. This allows you to start a new history that records a set of paths similar to `<start-point>` by easily running `git commit -a` to make the root commit. 
This can be useful when you want to publish the tree from a commit without exposing its full history. You might want to do this to publish an open source branch of a project whose current tree is "clean", but whose full history contains proprietary or otherwise encumbered bits of code. If you want to start a disconnected history that records a set of paths that is totally different from the one of `<start-point>`, then you should clear the index and the working tree right after creating the orphan branch by running `git rm -rf .` from the top level of the working tree. Afterwards you will be ready to prepare your new files, repopulating the working tree, by copying them from elsewhere, extracting a tarball, etc. --ignore-skip-worktree-bits In sparse checkout mode, `git checkout -- <paths>` would update only entries matched by `<paths>` and sparse patterns in `$GIT_DIR/info/sparse-checkout`. This option ignores the sparse patterns and adds back any files in `<paths>`. -m --merge When switching branches, if you have local modifications to one or more files that are different between the current branch and the branch to which you are switching, the command refuses to switch branches in order to preserve your modifications in context. However, with this option, a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch. When a merge conflict happens, the index entries for conflicting paths are left unmerged, and you need to resolve the conflicts and mark the resolved paths with `git add` (or `git rm` if the merge should result in deletion of the path). When checking out paths from the index, this option lets you recreate the conflicted merge in the specified paths. When switching branches with `--merge`, staged changes may be lost. 
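The three-way merge performed by `-m`/`--merge` when switching branches can be sketched in a throwaway repository, assuming `git` and a POSIX shell; the branch name `mytopic` and file name `frotz` follow the examples later in this page, and the file contents are illustrative:

```shell
# A local edit to the end of `frotz` is carried across a switch to a
# branch that changed the beginning of the same file.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git -c init.defaultBranch=main init -q
printf 'a\nb\nc\nd\ne\n' > frotz && git add frotz
git -c user.name=demo -c user.email=demo@example.com commit -q -m base
git checkout -q -b mytopic
printf 'A\nb\nc\nd\ne\n' > frotz           # mytopic rewrites the first line
git -c user.name=demo -c user.email=demo@example.com commit -q -am topic
git checkout -q main
printf 'a\nb\nc\nd\nE\n' > frotz           # local edit to the last line
git checkout -m -q mytopic                 # merge instead of refusing
cat frotz                                  # has both changes: A ... E
```

Because the two edits touch non-overlapping hunks, the merge resolves cleanly and you land on `mytopic` with the local edit preserved in the working tree (but, as noted above, not registered in the index).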
--conflict=<style> The same as `--merge` option above, but changes the way the conflicting hunks are presented, overriding the `merge.conflictStyle` configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3". -p --patch Interactively select hunks in the difference between the `<tree-ish>` (or the index, if unspecified) and the working tree. The chosen hunks are then applied in reverse to the working tree (and if a `<tree-ish>` was specified, the index). This means that you can use `git checkout -p` to selectively discard edits from your current working tree. See the “Interactive Mode” section of [git-add[1]](git-add) to learn how to operate the `--patch` mode. Note that this option uses the no overlay mode by default (see also `--overlay`), and currently doesn’t support overlay mode. --ignore-other-worktrees `git checkout` refuses when the wanted ref is already checked out by another worktree. This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree. --overwrite-ignore --no-overwrite-ignore Silently overwrite ignored files when switching branches. This is the default behavior. Use `--no-overwrite-ignore` to abort the operation when the new branch contains ignored files. --recurse-submodules --no-recurse-submodules Using `--recurse-submodules` will update the content of all active submodules according to the commit recorded in the superproject. If local modifications in a submodule would be overwritten the checkout will fail unless `-f` is used. If nothing (or `--no-recurse-submodules`) is used, submodules working trees will not be updated. Just like [git-submodule[1]](git-submodule), this will detach `HEAD` of the submodule. --overlay --no-overlay In the default overlay mode, `git checkout` never removes files from the index or the working tree. 
When specifying `--no-overlay`, files that appear in the index and working tree, but not in `<tree-ish>` are removed, to make them match `<tree-ish>` exactly. --pathspec-from-file=<file> Pathspec is passed in `<file>` instead of commandline args. If `<file>` is exactly `-` then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). See also `--pathspec-file-nul` and global `--literal-pathspecs`. --pathspec-file-nul Only meaningful with `--pathspec-from-file`. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). <branch> Branch to checkout; if it refers to a branch (i.e., a name that, when prepended with "refs/heads/", is a valid ref), then that branch is checked out. Otherwise, if it refers to a valid commit, your `HEAD` becomes "detached" and you are no longer on any branch (see below for details). You can use the `@{-N}` syntax to refer to the N-th last branch/commit checked out using "git checkout" operation. You may also specify `-` which is synonymous to `@{-1}`. As a special case, you may use `A...B` as a shortcut for the merge base of `A` and `B` if there is exactly one merge base. You can leave out at most one of `A` and `B`, in which case it defaults to `HEAD`. <new-branch> Name for the new branch. <start-point> The name of a commit at which to start the new branch; see [git-branch[1]](git-branch) for details. Defaults to `HEAD`. As a special case, you may use `"A...B"` as a shortcut for the merge base of `A` and `B` if there is exactly one merge base. You can leave out at most one of `A` and `B`, in which case it defaults to `HEAD`. <tree-ish> Tree to checkout from (when paths are given). If not specified, the index will be used. 
As a special case, you may use `"A...B"` as a shortcut for the merge base of `A` and `B` if there is exactly one merge base. You can leave out at most one of `A` and `B`, in which case it defaults to `HEAD`. -- Do not interpret any more arguments as options. <pathspec>… Limits the paths affected by the operation. For more details, see the `pathspec` entry in [gitglossary[7]](gitglossary). Detached head ------------- `HEAD` normally refers to a named branch (e.g. `master`). Meanwhile, each branch refers to a specific commit. Let’s look at a repo with three commits, one of them tagged, and with branch `master` checked out:

```
   HEAD (refers to branch 'master')
    |
    v
a---b---c  branch 'master' (refers to commit 'c')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

When a commit is created in this state, the branch is updated to refer to the new commit. Specifically, `git commit` creates a new commit `d`, whose parent is commit `c`, and then updates branch `master` to refer to new commit `d`. `HEAD` still refers to branch `master` and so indirectly now refers to commit `d`:

```
$ edit; git add; git commit

       HEAD (refers to branch 'master')
        |
        v
a---b---c---d  branch 'master' (refers to commit 'd')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

It is sometimes useful to be able to checkout a commit that is not at the tip of any named branch, or even to create a new commit that is not referenced by a named branch. Let’s look at what happens when we checkout commit `b` (here we show two ways this may be done):

```
$ git checkout v2.0  # or
$ git checkout master^^

   HEAD (refers to commit 'b')
    |
    v
a---b---c---d  branch 'master' (refers to commit 'd')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

Notice that regardless of which checkout command we use, `HEAD` now refers directly to commit `b`. This is known as being in detached `HEAD` state. It means simply that `HEAD` refers to a specific commit, as opposed to referring to a named branch.
Let’s see what happens when we create a commit:

```
$ edit; git add; git commit

     HEAD (refers to commit 'e')
      |
      v
      e
     /
a---b---c---d  branch 'master' (refers to commit 'd')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

There is now a new commit `e`, but it is referenced only by `HEAD`. We can of course add yet another commit in this state:

```
$ edit; git add; git commit

         HEAD (refers to commit 'f')
          |
          v
      e---f
     /
a---b---c---d  branch 'master' (refers to commit 'd')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

In fact, we can perform all the normal Git operations. But, let’s look at what happens when we then checkout `master`:

```
$ git checkout master

               HEAD (refers to branch 'master')
      e---f     |
     /          v
a---b---c---d  branch 'master' (refers to commit 'd')
    ^
    |
  tag 'v2.0' (refers to commit 'b')
```

It is important to realize that at this point nothing refers to commit `f`. Eventually commit `f` (and by extension commit `e`) will be deleted by the routine Git garbage collection process, unless we create a reference before that happens. If we have not yet moved away from commit `f`, any of these will create a reference to it:

```
$ git checkout -b foo  (1)
$ git branch foo       (2)
$ git tag foo          (3)
```

1. creates a new branch `foo`, which refers to commit `f`, and then updates `HEAD` to refer to branch `foo`. In other words, we’ll no longer be in detached `HEAD` state after this command. 2. similarly creates a new branch `foo`, which refers to commit `f`, but leaves `HEAD` detached. 3. creates a new tag `foo`, which refers to commit `f`, leaving `HEAD` detached. If we have moved away from commit `f`, then we must first recover its object name (typically by using git reflog), and then we can create a reference to it. For example, to see the last two commits to which `HEAD` referred, we can use either of these commands:

```
$ git reflog -2 HEAD # or
$ git log -g -2 HEAD
```

Argument disambiguation ----------------------- When there is only one argument given and it is not `--` (e.g.
`git checkout abc`), and when the argument is both a valid `<tree-ish>` (e.g. a branch `abc` exists) and a valid `<pathspec>` (e.g. a file or a directory whose name is "abc" exists), Git would usually ask you to disambiguate. Because checking out a branch is so common an operation, however, `git checkout abc` takes "abc" as a `<tree-ish>` in such a situation. Use `git checkout -- <pathspec>` if you want to checkout these paths out of the index. Examples -------- 1. The following sequence checks out the `master` branch, reverts the `Makefile` to two revisions back, deletes `hello.c` by mistake, and gets it back from the index.

```
$ git checkout master             (1)
$ git checkout master~2 Makefile  (2)
$ rm -f hello.c
$ git checkout hello.c            (3)
```

1. switch branch 2. take a file out of another commit 3. restore `hello.c` from the index If you want to check out `all` C source files out of the index, you can say

```
$ git checkout -- '*.c'
```

Note the quotes around `*.c`. The file `hello.c` will also be checked out, even though it is no longer in the working tree, because the file globbing is used to match entries in the index (not in the working tree by the shell). If you have an unfortunate branch that is named `hello.c`, this step would be confused as an instruction to switch to that branch. You should instead write:

```
$ git checkout -- hello.c
```

2. After working in the wrong branch, switching to the correct branch would be done using:

```
$ git checkout mytopic
```

However, your "wrong" branch and correct `mytopic` branch may differ in files that you have modified locally, in which case the above checkout would fail like this:

```
$ git checkout mytopic
error: You have local changes to 'frotz'; not switching branches.
```

You can give the `-m` flag to the command, which would try a three-way merge:

```
$ git checkout -m mytopic
Auto-merging frotz
```

After this three-way merge, the local modifications are *not* registered in your index file, so `git diff` would show you what changes you made since the tip of the new branch.

3. When a merge conflict happens during switching branches with the `-m` option, you would see something like this:

```
$ git checkout -m mytopic
Auto-merging frotz
ERROR: Merge conflict in frotz
fatal: merge program failed
```

At this point, `git diff` shows the changes cleanly merged as in the previous example, as well as the changes in the conflicted files. Edit and resolve the conflict and mark it resolved with `git add` as usual:

```
$ edit frotz
$ git add frotz
```

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

checkout.defaultRemote
When you run `git checkout <something>` or `git switch <something>` and only have one remote, it may implicitly fall back on checking out and tracking e.g. `origin/<something>`. This stops working as soon as you have more than one remote with a `<something>` reference. This setting allows for setting the name of a preferred remote that should always win when it comes to disambiguation. The typical use-case is to set this to `origin`.

Currently this is used by [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout) when `git checkout <something>` or `git switch <something>` will checkout the `<something>` branch on another remote, and by [git-worktree[1]](git-worktree) when `git worktree add` refers to a remote branch. This setting might be used for other checkout-like commands or functionality in the future.

checkout.guess
Provides the default value for the `--guess` or `--no-guess` option in `git checkout` and `git switch`.
See [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout).

checkout.workers
The number of parallel workers to use when updating the working tree. The default is one, i.e. sequential execution. If set to a value less than one, Git will use as many workers as the number of logical cores available. This setting and `checkout.thresholdForParallelism` affect all commands that perform checkout, e.g. checkout, clone, reset, sparse-checkout, etc.

Note: parallel checkout usually delivers better performance for repositories located on SSDs or over NFS. For repositories on spinning disks and/or machines with a small number of cores, the default sequential checkout often performs better. The size and compression level of a repository might also influence how well the parallel version performs.

checkout.thresholdForParallelism
When running parallel checkout with a small number of files, the cost of subprocess spawning and inter-process communication might outweigh the parallelization gains. This setting allows you to define the minimum number of files for which parallel checkout should be attempted. The default is 100.

See also
--------

[git-switch[1]](git-switch), [git-restore[1]](git-restore)
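As an illustration of the parallel-checkout settings described above, they could be combined like this (the numbers are arbitrary examples, not recommendations):

```
# Use one worker per logical core, but only parallelize checkouts
# that touch at least 500 files (both values are illustrative).
$ git config checkout.workers 0
$ git config checkout.thresholdForParallelism 500
```

Whether this helps depends on the repository and the storage it sits on; sequential checkout remains the default for a reason.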
git git-column

git-column
==========

Name
----

git-column - Display data in columns

Synopsis
--------

```
git column [--command=<name>] [--[raw-]mode=<mode>] [--width=<width>] [--indent=<string>] [--nl=<string>] [--padding=<n>]
```

Description
-----------

This command formats the lines of its standard input into a table with multiple columns. Each input line occupies one cell of the table. It is used internally by other git commands to format output into columns.

Options
-------

--command=<name>
Look up the layout mode using the configuration variables column.<name> and column.ui.

--mode=<mode>
Specify the layout mode. See the configuration variable column.ui for option syntax in [git-config[1]](git-config).

--raw-mode=<n>
Same as --mode, but takes the mode encoded as a number. This is mainly used by other commands that have already parsed the layout mode.

--width=<width>
Specify the terminal width. By default `git column` will detect the terminal width, or fall back to 80 if it is unable to do so.

--indent=<string>
String to be printed at the beginning of each line.

--nl=<string>
String to be printed at the end of each line, including the newline character.

--padding=<N>
The number of spaces between columns. One space by default.

Examples
--------

Format data by columns:

```
$ seq 1 24 | git column --mode=column --padding=5
1      4      7      10     13     16     19     22
2      5      8      11     14     17     20     23
3      6      9      12     15     18     21     24
```

Format data by rows:

```
$ seq 1 21 | git column --mode=row --padding=5
1      2      3      4      5      6      7
8      9      10     11     12     13     14
15     16     17     18     19     20     21
```

List some tags in a table with unequal column widths:

```
$ git tag --list 'v2.4.*' --column=row,dense
v2.4.0  v2.4.0-rc0  v2.4.0-rc1  v2.4.0-rc2  v2.4.0-rc3  v2.4.1
v2.4.10 v2.4.11 v2.4.12 v2.4.2  v2.4.3  v2.4.4
v2.4.5  v2.4.6  v2.4.7  v2.4.8  v2.4.9
```

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation.
The content is the same as what’s found there:

column.ui
Specify whether supported commands should output in columns. This variable consists of a list of tokens separated by spaces or commas:

These options control when the feature should be enabled (defaults to `never`):

`always`
always show in columns

`never`
never show in columns

`auto`
show in columns if the output is to the terminal

These options control layout (defaults to `column`). Setting any of these implies `always` if none of `always`, `never`, or `auto` are specified.

`column`
fill columns before rows

`row`
fill rows before columns

`plain`
show in one column

Finally, these options can be combined with a layout option (defaults to `nodense`):

`dense`
make unequal size columns to utilize more space

`nodense`
make equal size columns

column.branch
Specify whether to output branch listing in `git branch` in columns. See `column.ui` for details.

column.clean
Specify the layout when listing items in `git clean -i`, which always shows files and directories in columns. See `column.ui` for details.

column.status
Specify whether to output untracked files in `git status` in columns. See `column.ui` for details.

column.tag
Specify whether to output tag listings in `git tag` in columns. See `column.ui` for details.
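As a quick illustration of the settings above, columnar output can be enabled per-command (the chosen values are just one possible combination):

```
# Enable columns only when writing to a terminal, and lay out
# `git branch` listings in dense rows (values are illustrative).
$ git config --global column.ui auto
$ git config --global column.branch row,dense
```

With `column.ui` set to `auto`, piping the output (e.g. to `grep`) still produces one entry per line, so scripts are unaffected.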
git git-branch git-branch ========== Name ---- git-branch - List, create, or delete branches Synopsis -------- ``` git branch [--color[=<when>] | --no-color] [--show-current] [-v [--abbrev=<n> | --no-abbrev]] [--column[=<options>] | --no-column] [--sort=<key>] [--merged [<commit>]] [--no-merged [<commit>]] [--contains [<commit>]] [--no-contains [<commit>]] [--points-at <object>] [--format=<format>] [(-r | --remotes) | (-a | --all)] [--list] [<pattern>…​] git branch [--track[=(direct|inherit)] | --no-track] [-f] [--recurse-submodules] <branchname> [<start-point>] git branch (--set-upstream-to=<upstream> | -u <upstream>) [<branchname>] git branch --unset-upstream [<branchname>] git branch (-m | -M) [<oldbranch>] <newbranch> git branch (-c | -C) [<oldbranch>] <newbranch> git branch (-d | -D) [-r] <branchname>…​ git branch --edit-description [<branchname>] ``` Description ----------- If `--list` is given, or if there are no non-option arguments, existing branches are listed; the current branch will be highlighted in green and marked with an asterisk. Any branches checked out in linked worktrees will be highlighted in cyan and marked with a plus sign. Option `-r` causes the remote-tracking branches to be listed, and option `-a` shows both local and remote branches. If a `<pattern>` is given, it is used as a shell wildcard to restrict the output to matching branches. If multiple patterns are given, a branch is shown if it matches any of the patterns. Note that when providing a `<pattern>`, you must use `--list`; otherwise the command may be interpreted as branch creation. With `--contains`, shows only the branches that contain the named commit (in other words, the branches whose tip commits are descendants of the named commit), `--no-contains` inverts it. With `--merged`, only branches merged into the named commit (i.e. the branches whose tip commits are reachable from the named commit) will be listed. 
With `--no-merged` only branches not merged into the named commit will be listed. If the <commit> argument is missing it defaults to `HEAD` (i.e. the tip of the current branch). The command’s second form creates a new branch head named <branchname> which points to the current `HEAD`, or <start-point> if given. As a special case, for <start-point>, you may use `"A...B"` as a shortcut for the merge base of `A` and `B` if there is exactly one merge base. You can leave out at most one of `A` and `B`, in which case it defaults to `HEAD`. Note that this will create the new branch, but it will not switch the working tree to it; use "git switch <newbranch>" to switch to the new branch. When a local branch is started off a remote-tracking branch, Git sets up the branch (specifically the `branch.<name>.remote` and `branch.<name>.merge` configuration entries) so that `git pull` will appropriately merge from the remote-tracking branch. This behavior may be changed via the global `branch.autoSetupMerge` configuration flag. That setting can be overridden by using the `--track` and `--no-track` options, and changed later using `git branch --set-upstream-to`. With a `-m` or `-M` option, <oldbranch> will be renamed to <newbranch>. If <oldbranch> had a corresponding reflog, it is renamed to match <newbranch>, and a reflog entry is created to remember the branch renaming. If <newbranch> exists, -M must be used to force the rename to happen. The `-c` and `-C` options have the exact same semantics as `-m` and `-M`, except instead of the branch being renamed, it will be copied to a new name, along with its config and reflog. With a `-d` or `-D` option, `<branchname>` will be deleted. You may specify more than one branch for deletion. If the branch currently has a reflog then the reflog will also be deleted. Use `-r` together with `-d` to delete remote-tracking branches. 
Note that it only makes sense to delete remote-tracking branches if they no longer exist in the remote repository or if `git fetch` was configured not to fetch them again. See also the `prune` subcommand of [git-remote[1]](git-remote) for a way to clean up all obsolete remote-tracking branches.

Options
-------

-d
--delete
Delete a branch. The branch must be fully merged in its upstream branch, or in `HEAD` if no upstream was set with `--track` or `--set-upstream-to`.

-D
Shortcut for `--delete --force`.

--create-reflog
Create the branch’s reflog. This activates recording of all changes made to the branch ref, enabling use of date-based sha1 expressions such as "<branchname>@{yesterday}". Note that in non-bare repositories, reflogs are usually enabled by default by the `core.logAllRefUpdates` config option. The negated form `--no-create-reflog` only overrides an earlier `--create-reflog`, but currently does not negate the setting of `core.logAllRefUpdates`.

-f
--force
Reset <branchname> to <start-point>, even if <branchname> exists already. Without `-f`, `git branch` refuses to change an existing branch. In combination with `-d` (or `--delete`), allow deleting the branch irrespective of its merged status, or whether it even points to a valid commit. In combination with `-m` (or `--move`), allow renaming the branch even if the new branch name already exists; the same applies to `-c` (or `--copy`).

-m
--move
Move/rename a branch, together with its config and reflog.

-M
Shortcut for `--move --force`.

-c
--copy
Copy a branch, together with its config and reflog.

-C
Shortcut for `--copy --force`.

--color[=<when>]
Color branches to highlight current, local, and remote-tracking branches. The value must be `always` (the default), `never`, or `auto`.

--no-color
Turn off branch colors, even when the configuration file gives the default to color output. Same as `--color=never`.

-i
--ignore-case
Sorting and filtering branches are case insensitive.
--column[=<options>]
--no-column
Display branch listing in columns. See configuration variable `column.branch` for option syntax. `--column` and `--no-column` without options are equivalent to `always` and `never` respectively.

This option is only applicable in non-verbose mode.

-r
--remotes
List or delete (if used with -d) the remote-tracking branches. Combine with `--list` to match the optional pattern(s).

-a
--all
List both remote-tracking branches and local branches. Combine with `--list` to match optional pattern(s).

-l
--list
List branches. With optional `<pattern>...`, e.g. `git branch --list 'maint-*'`, list only the branches that match the pattern(s).

--show-current
Print the name of the current branch. In detached HEAD state, nothing is printed.

-v
-vv
--verbose
When in list mode, show the sha1 and commit subject line for each head, along with the relationship to the upstream branch (if any). If given twice, print the path of the linked worktree (if any) and the name of the upstream branch as well (see also `git remote show <remote>`). Note that the current worktree’s HEAD will not have its path printed (it will always be your current directory).

-q
--quiet
Be more quiet when creating or deleting a branch, suppressing non-error messages.

--abbrev=<n>
In the verbose listing that shows the commit object name, show the shortest prefix that is at least `<n>` hexdigits long and uniquely refers to the object. The default value is 7 and can be overridden by the `core.abbrev` config option.

--no-abbrev
Display the full sha1s in the output listing rather than abbreviating them.

-t
--track[=(direct|inherit)]
When creating a new branch, set up `branch.<name>.remote` and `branch.<name>.merge` configuration entries to set "upstream" tracking configuration for the new branch. This configuration will tell git to show the relationship between the two branches in `git status` and `git branch -v`.
Furthermore, it directs `git pull` without arguments to pull from the upstream when the new branch is checked out. The exact upstream branch is chosen depending on the optional argument: `-t`, `--track`, or `--track=direct` means to use the start-point branch itself as the upstream; `--track=inherit` means to copy the upstream configuration of the start-point branch. The branch.autoSetupMerge configuration variable specifies how `git switch`, `git checkout` and `git branch` should behave when neither `--track` nor `--no-track` are specified: The default option, `true`, behaves as though `--track=direct` were given whenever the start-point is a remote-tracking branch. `false` behaves as if `--no-track` were given. `always` behaves as though `--track=direct` were given. `inherit` behaves as though `--track=inherit` were given. `simple` behaves as though `--track=direct` were given only when the start-point is a remote-tracking branch and the new branch has the same name as the remote branch. See [git-pull[1]](git-pull) and [git-config[1]](git-config) for additional discussion on how the `branch.<name>.remote` and `branch.<name>.merge` options are used. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is set. --recurse-submodules THIS OPTION IS EXPERIMENTAL! Causes the current command to recurse into submodules if `submodule.propagateBranches` is enabled. See `submodule.propagateBranches` in [git-config[1]](git-config). Currently, only branch creation is supported. When used in branch creation, a new branch <branchname> will be created in the superproject and all of the submodules in the superproject’s <start-point>. In submodules, the branch will point to the submodule commit in the superproject’s <start-point> but the branch’s tracking information will be set up based on the submodule’s branches and remotes e.g. 
`git branch --recurse-submodules topic origin/main` will create the submodule branch "topic" that points to the submodule commit in the superproject’s "origin/main", but tracks the submodule’s "origin/main". --set-upstream As this option had confusing syntax, it is no longer supported. Please use `--track` or `--set-upstream-to` instead. -u <upstream> --set-upstream-to=<upstream> Set up <branchname>'s tracking information so <upstream> is considered <branchname>'s upstream branch. If no <branchname> is specified, then it defaults to the current branch. --unset-upstream Remove the upstream information for <branchname>. If no branch is specified it defaults to the current branch. --edit-description Open an editor and edit the text to explain what the branch is for, to be used by various other commands (e.g. `format-patch`, `request-pull`, and `merge` (if enabled)). Multi-line explanations may be used. --contains [<commit>] Only list branches which contain the specified commit (HEAD if not specified). Implies `--list`. --no-contains [<commit>] Only list branches which don’t contain the specified commit (HEAD if not specified). Implies `--list`. --merged [<commit>] Only list branches whose tips are reachable from the specified commit (HEAD if not specified). Implies `--list`. --no-merged [<commit>] Only list branches whose tips are not reachable from the specified commit (HEAD if not specified). Implies `--list`. <branchname> The name of the branch to create or delete. The new branch name must pass all checks defined by [git-check-ref-format[1]](git-check-ref-format). Some of these checks may restrict the characters allowed in a branch name. <start-point> The new branch head will point to this commit. It may be given as a branch name, a commit-id, or a tag. If this option is omitted, the current HEAD will be used instead. <oldbranch> The name of an existing branch to rename. <newbranch> The new name for an existing branch. 
The same restrictions as for <branchname> apply.

--sort=<key>
Sort based on the key given. Prefix `-` to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. The keys supported are the same as those in `git for-each-ref`. Sort order defaults to the value configured for the `branch.sort` variable if it exists, or to sorting based on the full refname (including `refs/...` prefix). This lists detached HEAD (if present) first, then local branches and finally remote-tracking branches. See [git-config[1]](git-config).

--points-at <object>
Only list branches of the given object.

--format <format>
A string that interpolates `%(fieldname)` from a branch ref being shown and the object it points at. The format is the same as that of [git-for-each-ref[1]](git-for-each-ref).

Configuration
-------------

`pager.branch` is only respected when listing branches, i.e., when `--list` is used or implied. The default is to use a pager. See [git-config[1]](git-config).

Everything above this line in this section isn’t included from the [git-config[1]](git-config) documentation. The content that follows is the same as what’s found there:

branch.autoSetupMerge
Tells `git branch`, `git switch` and `git checkout` to set up new branches so that [git-pull[1]](git-pull) will appropriately merge from the starting point branch. Note that even if this option is not set, this behavior can be chosen per-branch using the `--track` and `--no-track` options.
The valid settings are: `false` — no automatic setup is done; `true` — automatic setup is done when the starting point is a remote-tracking branch; `always` — automatic setup is done when the starting point is either a local branch or remote-tracking branch; `inherit` — if the starting point has a tracking configuration, it is copied to the new branch; `simple` — automatic setup is done only when the starting point is a remote-tracking branch and the new branch has the same name as the remote branch. This option defaults to true. branch.autoSetupRebase When a new branch is created with `git branch`, `git switch` or `git checkout` that tracks another branch, this variable tells Git to set up pull to rebase instead of merge (see "branch.<name>.rebase"). When `never`, rebase is never automatically set to true. When `local`, rebase is set to true for tracked branches of other local branches. When `remote`, rebase is set to true for tracked branches of remote-tracking branches. When `always`, rebase will be set to true for all tracking branches. See "branch.autoSetupMerge" for details on how to set up a branch to track another branch. This option defaults to never. branch.sort This variable controls the sort ordering of branches when displayed by [git-branch[1]](git-branch). Without the "--sort=<value>" option provided, the value of this variable will be used as the default. See [git-for-each-ref[1]](git-for-each-ref) field names for valid values. branch.<name>.remote When on branch <name>, it tells `git fetch` and `git push` which remote to fetch from/push to. The remote to push to may be overridden with `remote.pushDefault` (for all branches). The remote to push to, for the current branch, may be further overridden by `branch.<name>.pushRemote`. If no remote is configured, or if you are not on any branch and there is more than one remote defined in the repository, it defaults to `origin` for fetching and `remote.pushDefault` for pushing. 
Additionally, `.` (a period) is the current local repository (a dot-repository), see `branch.<name>.merge`'s final note below. branch.<name>.pushRemote When on branch <name>, it overrides `branch.<name>.remote` for pushing. It also overrides `remote.pushDefault` for pushing from branch <name>. When you pull from one place (e.g. your upstream) and push to another place (e.g. your own publishing repository), you would want to set `remote.pushDefault` to specify the remote to push to for all branches, and use this option to override it for a specific branch. branch.<name>.merge Defines, together with branch.<name>.remote, the upstream branch for the given branch. It tells `git fetch`/`git pull`/`git rebase` which branch to merge and can also affect `git push` (see push.default). When in branch <name>, it tells `git fetch` the default refspec to be marked for merging in FETCH\_HEAD. The value is handled like the remote part of a refspec, and must match a ref which is fetched from the remote given by "branch.<name>.remote". The merge information is used by `git pull` (which at first calls `git fetch`) to lookup the default branch for merging. Without this option, `git pull` defaults to merge the first refspec fetched. Specify multiple values to get an octopus merge. If you wish to setup `git pull` so that it merges into <name> from another branch in the local repository, you can point branch.<name>.merge to the desired branch, and use the relative path setting `.` (a period) for branch.<name>.remote. branch.<name>.mergeOptions Sets default options for merging into branch <name>. The syntax and supported options are the same as those of [git-merge[1]](git-merge), but option values containing whitespace characters are currently not supported. branch.<name>.rebase When true, rebase the branch <name> on top of the fetched branch, instead of merging the default branch from the default remote when "git pull" is run. 
See "pull.rebase" for doing this in a non branch-specific manner. When `merges` (or just `m`), pass the `--rebase-merges` option to `git rebase` so that the local merge commits are included in the rebase (see [git-rebase[1]](git-rebase) for details). When the value is `interactive` (or just `i`), the rebase is run in interactive mode. **NOTE**: this is a possibly dangerous operation; do **not** use it unless you understand the implications (see [git-rebase[1]](git-rebase) for details). branch.<name>.description Branch description, can be edited with `git branch --edit-description`. Branch description is automatically added in the format-patch cover letter or request-pull summary. Examples -------- Start development from a known tag ``` $ git clone git://git.kernel.org/pub/scm/.../linux-2.6 my2.6 $ cd my2.6 $ git branch my2.6.14 v2.6.14 (1) $ git switch my2.6.14 ``` 1. This step and the next one could be combined into a single step with "checkout -b my2.6.14 v2.6.14". Delete an unneeded branch ``` $ git clone git://git.kernel.org/.../git.git my.git $ cd my.git $ git branch -d -r origin/todo origin/html origin/man (1) $ git branch -D test (2) ``` 1. Delete the remote-tracking branches "todo", "html" and "man". The next `fetch` or `pull` will create them again unless you configure them not to. See [git-fetch[1]](git-fetch). 2. Delete the "test" branch even if the "master" branch (or whichever branch is currently checked out) does not have all commits from the test branch. Listing branches from a specific remote ``` $ git branch -r -l '<remote>/<pattern>' (1) $ git for-each-ref 'refs/remotes/<remote>/<pattern>' (2) ``` 1. Using `-a` would conflate <remote> with any local branches you happen to have been prefixed with the same <remote> pattern. 2. `for-each-ref` can take a wide range of options. See [git-for-each-ref[1]](git-for-each-ref) Patterns will normally need quoting. 
Notes ----- If you are creating a branch that you want to switch to immediately, it is easier to use the "git switch" command with its `-c` option to do the same thing with a single command. The options `--contains`, `--no-contains`, `--merged` and `--no-merged` serve four related but different purposes: * `--contains <commit>` is used to find all branches which will need special attention if <commit> were to be rebased or amended, since those branches contain the specified <commit>. * `--no-contains <commit>` is the inverse of that, i.e. branches that don’t contain the specified <commit>. * `--merged` is used to find all branches which can be safely deleted, since those branches are fully contained by HEAD. * `--no-merged` is used to find branches which are candidates for merging into HEAD, since those branches are not fully contained by HEAD. When combining multiple `--contains` and `--no-contains` filters, only references that contain at least one of the `--contains` commits and contain none of the `--no-contains` commits are shown. When combining multiple `--merged` and `--no-merged` filters, only references that are reachable from at least one of the `--merged` commits and from none of the `--no-merged` commits are shown. See also -------- [git-check-ref-format[1]](git-check-ref-format), [git-fetch[1]](git-fetch), [git-remote[1]](git-remote), [“Understanding history: What is a branch?”](user-manual#what-is-a-branch) in the Git User’s Manual.
git gitcredentials gitcredentials ============== Name ---- gitcredentials - Providing usernames and passwords to Git Synopsis -------- ``` git config credential.https://example.com.username myusername git config credential.helper "$helper $options" ``` Description ----------- Git will sometimes need credentials from the user in order to perform operations; for example, it may need to ask for a username and password in order to access a remote repository over HTTP. Some remotes accept a personal access token or OAuth access token as a password. This manual describes the mechanisms Git uses to request these credentials, as well as some features to avoid inputting these credentials repeatedly. Requesting credentials ---------------------- Without any credential helpers defined, Git will try the following strategies to ask the user for usernames and passwords: 1. If the `GIT_ASKPASS` environment variable is set, the program specified by the variable is invoked. A suitable prompt is provided to the program on the command line, and the user’s input is read from its standard output. 2. Otherwise, if the `core.askPass` configuration variable is set, its value is used as above. 3. Otherwise, if the `SSH_ASKPASS` environment variable is set, its value is used as above. 4. Otherwise, the user is prompted on the terminal. Avoiding repetition ------------------- It can be cumbersome to input the same credentials over and over. Git provides two methods to reduce this annoyance: 1. Static configuration of usernames for a given authentication context. 2. Credential helpers to cache or store passwords, or to interact with a system password wallet or keychain. The first is simple and appropriate if you do not have secure storage available for a password. 
It is generally configured by adding this to your config: ``` [credential "https://example.com"] username = me ``` Credential helpers, on the other hand, are external programs from which Git can request both usernames and passwords; they typically interface with secure storage provided by the OS or other programs. Alternatively, a credential-generating helper might generate credentials for certain servers via some API. To use a helper, you must first select one to use. Git currently includes the following helpers: cache Cache credentials in memory for a short period of time. See [git-credential-cache[1]](git-credential-cache) for details. store Store credentials indefinitely on disk. See [git-credential-store[1]](git-credential-store) for details. You may also have third-party helpers installed; search for `credential-*` in the output of `git help -a`, and consult the documentation of individual helpers. Once you have selected a helper, you can tell Git to use it by putting its name into the credential.helper variable. 1. Find a helper. ``` $ git help -a | grep credential- credential-foo ``` 2. Read its description. ``` $ git help credential-foo ``` 3. Tell Git to use it. ``` $ git config --global credential.helper foo ``` Credential contexts ------------------- Git considers each credential to have a context defined by a URL. This context is used to look up context-specific configuration, and is passed to any helpers, which may use it as an index into secure storage. For instance, imagine we are accessing `https://example.com/foo.git`. When Git looks into a config file to see if a section matches this context, it will consider the two a match if the context is a more-specific subset of the pattern in the config file. For example, if you have this in your config file: ``` [credential "https://example.com"] username = foo ``` then we will match: both protocols are the same, both hosts are the same, and the "pattern" URL does not care about the path component at all. 
However, this context would not match:

```
[credential "https://kernel.org"]
	username = foo
```

because the hostnames differ. Nor would it match `foo.example.com`; Git compares hostnames exactly, without considering whether two hosts are part of the same domain. Likewise, a config entry for `http://example.com` would not match: Git compares the protocols exactly. However, you may use wildcards in the domain name and other pattern matching techniques as with the `http.<URL>.*` options.

If the "pattern" URL does include a path component, then this too must match exactly: the context `https://example.com/bar/baz.git` will match a config entry for `https://example.com/bar/baz.git` (in addition to matching the config entry for `https://example.com`) but will not match a config entry for `https://example.com/bar`.

Configuration options
---------------------

Options for a credential context can be configured either in `credential.*` (which applies to all credentials), or `credential.<URL>.*`, where <URL> matches the context as described above.

The following options are available in either location:

helper
The name of an external credential helper, and any associated options. If the helper name is not an absolute path, then the string `git credential-` is prepended. The resulting string is executed by the shell (so, for example, setting this to `foo --option=bar` will execute `git credential-foo --option=bar` via the shell). See the manual of specific helpers for examples of their use.

If there are multiple instances of the `credential.helper` configuration variable, each helper will be tried in turn, and may provide a username, password, or nothing. Once Git has acquired both a username and a password, no more helpers will be tried.
If `credential.helper` is configured to the empty string, this resets the helper list to empty (so you may override a helper set by a lower-priority config file by configuring the empty-string helper, followed by whatever set of helpers you would like). username A default username, if one is not provided in the URL. useHttpPath By default, Git does not consider the "path" component of an http URL to be worth matching via external helpers. This means that a credential stored for `https://example.com/foo.git` will also be used for `https://example.com/bar.git`. If you do want to distinguish these cases, set this option to `true`. Custom helpers -------------- You can write your own custom helpers to interface with any system in which you keep credentials. Credential helpers are programs executed by Git to fetch or save credentials from and to long-term storage (where "long-term" is simply longer than a single Git process; e.g., credentials may be stored in-memory for a few minutes, or indefinitely on disk). Each helper is specified by a single string in the configuration variable `credential.helper` (and others, see [git-config[1]](git-config)). The string is transformed by Git into a command to be executed using these rules: 1. If the helper string begins with "!", it is considered a shell snippet, and everything after the "!" becomes the command. 2. Otherwise, if the helper string begins with an absolute path, the verbatim helper string becomes the command. 3. Otherwise, the string "git credential-" is prepended to the helper string, and the result becomes the command. The resulting command then has an "operation" argument appended to it (see below for details), and the result is executed by the shell. 
Here are some example specifications: ``` # run "git credential-foo" [credential] helper = foo # same as above, but pass an argument to the helper [credential] helper = "foo --bar=baz" # the arguments are parsed by the shell, so use shell # quoting if necessary [credential] helper = "foo --bar='whitespace arg'" # you can also use an absolute path, which will not use the git wrapper [credential] helper = "/path/to/my/helper --with-arguments" # or you can specify your own shell snippet [credential "https://example.com"] username = your_user helper = "!f() { test \"$1\" = get && echo \"password=$(cat $HOME/.secret)\"; }; f" ``` Generally speaking, rule (3) above is the simplest for users to specify. Authors of credential helpers should make an effort to assist their users by naming their program "git-credential-$NAME", and putting it in the `$PATH` or `$GIT_EXEC_PATH` during installation, which will allow a user to enable it with `git config credential.helper $NAME`. When a helper is executed, it will have one "operation" argument appended to its command line, which is one of: `get` Return a matching credential, if any exists. `store` Store the credential, if applicable to the helper. `erase` Remove a matching credential, if any, from the helper’s storage. The details of the credential will be provided on the helper’s stdin stream. The exact format is the same as the input/output format of the `git credential` plumbing command (see the section `INPUT/OUTPUT FORMAT` in [git-credential[1]](git-credential) for a detailed specification). For a `get` operation, the helper should produce a list of attributes on stdout in the same format (see [git-credential[1]](git-credential) for common attributes). A helper is free to produce a subset, or even no values at all if it has nothing useful to provide. Any provided attributes will overwrite those already known about by Git’s credential subsystem. Unrecognised attributes are silently discarded. 
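The operation dispatch described above can be sketched as a tiny read-only helper. This is an illustrative shell function, not a real helper: it answers `get` with a fixed username and silently ignores everything else, as the protocol requires:

```shell
# Illustrative read-only credential helper: honors only "get" and
# silently ignores store, erase, and any future operations.
credential_demo() {
    # A real helper would first read the context attributes from stdin.
    case "$1" in
        get) printf 'username=me\n' ;;   # producing a subset of attributes is fine
        *)   : ;;                        # unsupported operations: silently ignore
    esac
}

credential_demo get     # prints: username=me
credential_demo erase   # prints nothing
```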
While it is possible to override all attributes, well-behaved helpers should refrain from doing so for any attribute other than username and password.

If a helper outputs a `quit` attribute with a value of `true` or `1`, no further helpers will be consulted, nor will the user be prompted (if no credential has been provided, the operation will then fail). Similarly, no more helpers will be consulted once both username and password have been provided.

For a `store` or `erase` operation, the helper’s output is ignored. If a helper fails to perform the requested operation or needs to notify the user of a potential issue, it may write to stderr. If it does not support the requested operation (e.g., a read-only store or generator), it should silently ignore the request.

If a helper receives any other operation, it should silently ignore the request. This leaves room for future operations to be added (older helpers will just ignore the new requests).

git-ls-tree
===========

Name
----

git-ls-tree - List the contents of a tree object

Synopsis
--------

```
git ls-tree [-d] [-r] [-t] [-l] [-z] [--name-only] [--name-status] [--object-only]
            [--full-name] [--full-tree] [--abbrev[=<n>]] [--format=<format>] <tree-ish> [<path>…​]
```

Description
-----------

Lists the contents of a given tree object, like what "/bin/ls -a" does in the current working directory. Note that:

* the behaviour is slightly different from that of "/bin/ls" in that the `<path>` denotes just a list of patterns to match, so specifying a directory name (without `-r`) will behave differently, and the order of the arguments does not matter.
* the behaviour is similar to that of "/bin/ls" in that the `<path>` is taken as relative to the current working directory. E.g. when you are in a directory `sub` that has a directory `dir`, you can run `git ls-tree -r HEAD dir` to list the contents of the tree (that is `sub/dir` in `HEAD`). You don’t want to give a tree that is not at the root level (e.g.
`git ls-tree -r HEAD:sub dir`) in this case, as that would result in asking for `sub/sub/dir` in the `HEAD` commit. However, the current working directory can be ignored by passing the --full-tree option.

Options
-------

<tree-ish>

Id of a tree-ish.

-d

Show only the named tree entry itself, not its children.

-r

Recurse into sub-trees.

-t

Show tree entries even when going to recurse them. Has no effect if `-r` was not passed. `-d` implies `-t`.

-l
--long

Show object size of blob (file) entries.

-z

\0 line termination on output and do not quote filenames. See OUTPUT FORMAT below for more information.

--name-only
--name-status

List only filenames (instead of the "long" output), one per line. Cannot be combined with `--object-only`.

--object-only

List only names of the objects, one per line. Cannot be combined with `--name-only` or `--name-status`. This is equivalent to specifying `--format='%(objectname)'`, but for both this option and that exact format the command takes a hand-optimized codepath instead of going through the generic formatting mechanism.

--abbrev[=<n>]

Instead of showing the full 40-byte hexadecimal object names, show the shortest prefix that is at least `<n>` hexdigits long that uniquely refers to the object. A non-default number of digits can be specified with --abbrev=<n>.

--full-name

Instead of showing the path names relative to the current working directory, show the full path names.

--full-tree

Do not limit the listing to the current working directory. Implies --full-name.

--format=<format>

A string that interpolates `%(fieldname)` from the result being shown. It also interpolates `%%` to `%`, and `%xx` where `xx` are hex digits interpolates to character with hex code `xx`; for example `%00` interpolates to `\0` (NUL), `%09` to `\t` (TAB) and `%0a` to `\n` (LF). When specified, `--format` cannot be combined with other format-altering options, including `--long`, `--name-only` and `--object-only`.
[<path>…​]

When paths are given, show them (note that these aren’t really raw pathnames, but rather a list of patterns to match). Otherwise implicitly uses the root level of the tree as the sole path argument.

Output format
-------------

The output format of `ls-tree` is determined by either the `--format` option, or other format-altering options such as `--name-only` etc. (see `--format` above).

The use of certain `--format` directives is equivalent to using those options, but invoking the full formatting machinery can be slower than using an appropriate formatting option. In cases where the `--format` would exactly map to an existing option `ls-tree` will use the appropriate faster path. Thus the default format is equivalent to:

```
%(objectmode) %(objecttype) %(objectname)%x09%(path)
```

This output format is compatible with what `--index-info --stdin` of `git update-index` expects.

When the `-l` option is used, the format changes to

```
%(objectmode) %(objecttype) %(objectname) %(objectsize:padded)%x09%(path)
```

Object size identified by <objectname> is given in bytes, and right-justified with a minimum width of 7 characters. Object size is given only for blob (file) entries; for other entries the `-` character is used in place of size.

Without the `-z` option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). Using `-z` the filename is output verbatim and the line is terminated by a NUL byte.

Customized format: It is possible to print in a custom format by using the `--format` option, which is able to interpolate different fields using a `%(fieldname)` notation. For example, if you only care about the "objectname" and "path" fields, you can execute with a specific "--format" like

```
git ls-tree --format='%(objectname) %(path)' <tree-ish>
```

Field names
-----------

Various values from structured fields can be used to interpolate into the resulting output.
For each line of output, the following names can be used:

objectmode

The mode of the object.

objecttype

The type of the object (`commit`, `blob` or `tree`).

objectname

The name of the object.

objectsize[:padded]

The size of a `blob` object ("-" if it’s a `commit` or `tree`). It also supports a padded format of size with "%(objectsize:padded)".

path

The pathname of the object.

gittutorial-2
=============

Name
----

gittutorial-2 - A tutorial introduction to Git: part two

Synopsis
--------

```
git *
```

Description
-----------

You should work through [gittutorial[7]](gittutorial) before reading this tutorial.

The goal of this tutorial is to introduce two fundamental pieces of Git’s architecture—​the object database and the index file—​and to provide the reader with everything necessary to understand the rest of the Git documentation.

The git object database
-----------------------

Let’s start a new project and create a small amount of history:

```
$ mkdir test-project
$ cd test-project
$ git init
Initialized empty Git repository in .git/
$ echo 'hello world' > file.txt
$ git add .
$ git commit -a -m "initial commit"
[master (root-commit) 54196cc] initial commit
 1 file changed, 1 insertion(+)
 create mode 100644 file.txt
$ echo 'hello world!' >file.txt
$ git commit -a -m "add emphasis"
[master c4d59f3] add emphasis
 1 file changed, 1 insertion(+), 1 deletion(-)
```

What are the 7 digits of hex that Git responded to the commit with? We saw in part one of the tutorial that commits have names like this. It turns out that every object in the Git history is stored under a 40-digit hex name. That name is the SHA-1 hash of the object’s contents; among other things, this ensures that Git will never store the same data twice (since identical data is given an identical SHA-1 name), and that the contents of a Git object will never change (since that would change the object’s name as well).
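Because an object's name is determined entirely by its content, the same bytes always hash to the same name, and the name can be computed without a repository at all. A quick illustration with the `git hash-object` plumbing command (this computes the blob name for the first version of `file.txt` above):

```shell
# A blob's name is the SHA-1 of its content plus a small header, so
# 'hello world' always hashes to the same object name:
echo 'hello world' | git hash-object --stdin
# prints: 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```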
The 7-character hex strings here are simply abbreviations of such 40-character strings. Abbreviations can be used everywhere the 40-character strings can be used, so long as they are unambiguous.

It is expected that the content of the commit object you created while following the example above generates a different SHA-1 hash than the one shown above because the commit object records the time when it was created and the name of the person performing the commit.

We can ask Git about this particular object with the `cat-file` command. Don’t copy the 40 hex digits from this example but use those from your own version. Note that you can shorten it to only a few characters to save yourself typing all 40 hex digits:

```
$ git cat-file -t 54196cc2
commit
$ git cat-file commit 54196cc2
tree 92b8b694ffb1675e5975148e1121810081dbdffe
author J. Bruce Fields <[email protected]> 1143414668 -0500
committer J. Bruce Fields <[email protected]> 1143414668 -0500

initial commit
```

A tree can refer to one or more "blob" objects, each corresponding to a file. In addition, a tree can also refer to other tree objects, thus creating a directory hierarchy. You can examine the contents of any tree using ls-tree (remember that a long enough initial portion of the SHA-1 will also work):

```
$ git ls-tree 92b8b694
100644 blob 3b18e512dba79e4c8300dd08aeb37f8e728b8dad	file.txt
```

Thus we see that this tree has one file in it. The SHA-1 hash is a reference to that file’s data:

```
$ git cat-file -t 3b18e512
blob
```

A "blob" is just file data, which we can also examine with cat-file:

```
$ git cat-file blob 3b18e512
hello world
```

Note that this is the old file data; so the object that Git named in its response to the initial tree was a tree with a snapshot of the directory state that was recorded by the first commit.
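Your commit names will differ from the ones above, but blob and tree names depend only on content, so they can be reproduced exactly in a throwaway repository. A sketch (the temporary directory is discarded afterwards):

```shell
# Recreate the first version of file.txt in a scratch repository; the
# blob and tree names come out identical to the ones shown above.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo 'hello world' >file.txt
git add file.txt
oid=$(git rev-parse :file.txt)   # the blob id recorded in the index
git cat-file -t "$oid"           # prints: blob
git cat-file blob "$oid"         # prints: hello world
echo "$oid"                      # prints: 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
git write-tree                   # prints: 92b8b694ffb1675e5975148e1121810081dbdffe
```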
All of these objects are stored under their SHA-1 names inside the Git directory: ``` $ find .git/objects/ .git/objects/ .git/objects/pack .git/objects/info .git/objects/3b .git/objects/3b/18e512dba79e4c8300dd08aeb37f8e728b8dad .git/objects/92 .git/objects/92/b8b694ffb1675e5975148e1121810081dbdffe .git/objects/54 .git/objects/54/196cc2703dc165cbd373a65a4dcf22d50ae7f7 .git/objects/a0 .git/objects/a0/423896973644771497bdc03eb99d5281615b51 .git/objects/d0 .git/objects/d0/492b368b66bdabf2ac1fd8c92b39d3db916e59 .git/objects/c4 .git/objects/c4/d59f390b9cfd4318117afde11d601c1085f241 ``` and the contents of these files is just the compressed data plus a header identifying their length and their type. The type is either a blob, a tree, a commit, or a tag. The simplest commit to find is the HEAD commit, which we can find from .git/HEAD: ``` $ cat .git/HEAD ref: refs/heads/master ``` As you can see, this tells us which branch we’re currently on, and it tells us this by naming a file under the .git directory, which itself contains a SHA-1 name referring to a commit object, which we can examine with cat-file: ``` $ cat .git/refs/heads/master c4d59f390b9cfd4318117afde11d601c1085f241 $ git cat-file -t c4d59f39 commit $ git cat-file commit c4d59f39 tree d0492b368b66bdabf2ac1fd8c92b39d3db916e59 parent 54196cc2703dc165cbd373a65a4dcf22d50ae7f7 author J. Bruce Fields <[email protected]> 1143418702 -0500 committer J. Bruce Fields <[email protected]> 1143418702 -0500 add emphasis ``` The "tree" object here refers to the new state of the tree: ``` $ git ls-tree d0492b36 100644 blob a0423896973644771497bdc03eb99d5281615b51 file.txt $ git cat-file blob a0423896 hello world! ``` and the "parent" object refers to the previous commit: ``` $ git cat-file commit 54196cc2 tree 92b8b694ffb1675e5975148e1121810081dbdffe author J. Bruce Fields <[email protected]> 1143414668 -0500 committer J. 
Bruce Fields <[email protected]> 1143414668 -0500 initial commit ``` The tree object is the tree we examined first, and this commit is unusual in that it lacks any parent. Most commits have only one parent, but it is also common for a commit to have multiple parents. In that case the commit represents a merge, with the parent references pointing to the heads of the merged branches. Besides blobs, trees, and commits, the only remaining type of object is a "tag", which we won’t discuss here; refer to [git-tag[1]](git-tag) for details. So now we know how Git uses the object database to represent a project’s history: * "commit" objects refer to "tree" objects representing the snapshot of a directory tree at a particular point in the history, and refer to "parent" commits to show how they’re connected into the project history. * "tree" objects represent the state of a single directory, associating directory names to "blob" objects containing file data and "tree" objects containing subdirectory information. * "blob" objects contain file data without any other structure. * References to commit objects at the head of each branch are stored in files under .git/refs/heads/. * The name of the current branch is stored in .git/HEAD. Note, by the way, that lots of commands take a tree as an argument. But as we can see above, a tree can be referred to in many different ways—​by the SHA-1 name for that tree, by the name of a commit that refers to the tree, by the name of a branch whose head refers to that tree, etc.--and most such commands can accept any of these names. In command synopses, the word "tree-ish" is sometimes used to designate such an argument. The index file -------------- The primary tool we’ve been using to create commits is `git-commit -a`, which creates a commit including every change you’ve made to your working tree. But what if you want to commit changes only to certain files? Or only certain changes to certain files? 
If we look at the way commits are created under the cover, we’ll see that there are more flexible ways of creating commits.

Continuing with our test-project, let’s modify file.txt again:

```
$ echo "hello world, again" >>file.txt
```

but this time instead of immediately making the commit, let’s take an intermediate step, and ask for diffs along the way to keep track of what’s happening:

```
$ git diff
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
 hello world!
+hello world, again
$ git add file.txt
$ git diff
```

The last diff is empty, but no new commits have been made, and the head still doesn’t contain the new line:

```
$ git diff HEAD
diff --git a/file.txt b/file.txt
index a042389..513feba 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
 hello world!
+hello world, again
```

So `git diff` is comparing against something other than the head. The thing that it’s comparing against is actually the index file, which is stored in .git/index in a binary format, but whose contents we can examine with ls-files:

```
$ git ls-files --stage
100644 513feba2e53ebbd2532419ded848ba19de88ba00 0	file.txt
$ git cat-file -t 513feba2
blob
$ git cat-file blob 513feba2
hello world!
hello world, again
```

So what our `git add` did was store a new blob and then put a reference to it in the index file. If we modify the file again, we’ll see that the new modifications are reflected in the `git diff` output:

```
$ echo 'again?' >>file.txt
$ git diff
index 513feba..ba3da7b 100644
--- a/file.txt
+++ b/file.txt
@@ -1,2 +1,3 @@
 hello world!
 hello world, again
+again?
```

With the right arguments, `git diff` can also show us the difference between the working directory and the last commit, or between the index and the last commit:

```
$ git diff HEAD
diff --git a/file.txt b/file.txt
index a042389..ba3da7b 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,3 @@
 hello world!
+hello world, again
+again?
$ git diff --cached diff --git a/file.txt b/file.txt index a042389..513feba 100644 --- a/file.txt +++ b/file.txt @@ -1 +1,2 @@ hello world! +hello world, again ``` At any time, we can create a new commit using `git commit` (without the "-a" option), and verify that the state committed only includes the changes stored in the index file, not the additional change that is still only in our working tree: ``` $ git commit -m "repeat" $ git diff HEAD diff --git a/file.txt b/file.txt index 513feba..ba3da7b 100644 --- a/file.txt +++ b/file.txt @@ -1,2 +1,3 @@ hello world! hello world, again +again? ``` So by default `git commit` uses the index to create the commit, not the working tree; the "-a" option to commit tells it to first update the index with all changes in the working tree. Finally, it’s worth looking at the effect of `git add` on the index file: ``` $ echo "goodbye, world" >closing.txt $ git add closing.txt ``` The effect of the `git add` was to add one entry to the index file: ``` $ git ls-files --stage 100644 8b9743b20d4b15be3955fc8d5cd2b09cd2336138 0 closing.txt 100644 513feba2e53ebbd2532419ded848ba19de88ba00 0 file.txt ``` And, as you can see with cat-file, this new entry refers to the current contents of the file: ``` $ git cat-file blob 8b9743b2 goodbye, world ``` The "status" command is a useful way to get a quick summary of the situation: ``` $ git status On branch master Changes to be committed: (use "git restore --staged <file>..." to unstage) new file: closing.txt Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git restore <file>..." to discard changes in working directory) modified: file.txt ``` Since the current state of closing.txt is cached in the index file, it is listed as "Changes to be committed". Since file.txt has changes in the working directory that aren’t reflected in the index, it is marked "changed but not updated". 
At this point, running "git commit" would create a commit that added closing.txt (with its new contents), but that didn’t modify file.txt. Also, note that a bare `git diff` shows the changes to file.txt, but not the addition of closing.txt, because the version of closing.txt in the index file is identical to the one in the working directory. In addition to being the staging area for new commits, the index file is also populated from the object database when checking out a branch, and is used to hold the trees involved in a merge operation. See [gitcore-tutorial[7]](gitcore-tutorial) and the relevant man pages for details. What next? ---------- At this point you should know everything necessary to read the man pages for any of the git commands; one good place to start would be with the commands mentioned in [giteveryday[7]](giteveryday). You should be able to find any unknown jargon in [gitglossary[7]](gitglossary). The [Git User’s Manual](user-manual) provides a more comprehensive introduction to Git. [gitcvs-migration[7]](gitcvs-migration) explains how to import a CVS repository into Git, and shows how to use Git in a CVS-like way. For some interesting examples of Git use, see the [howtos](howto-index). For Git developers, [gitcore-tutorial[7]](gitcore-tutorial) goes into detail on the lower-level Git mechanisms involved in, for example, creating a new commit. See also -------- [gittutorial[7]](gittutorial), [gitcvs-migration[7]](gitcvs-migration), [gitcore-tutorial[7]](gitcore-tutorial), [gitglossary[7]](gitglossary), [git-help[1]](git-help), [giteveryday[7]](giteveryday), [The Git User’s Manual](user-manual)
git-bisect
==========

Name
----

git-bisect - Use binary search to find the commit that introduced a bug

Synopsis
--------

```
git bisect <subcommand> <options>
```

Description
-----------

The command takes various subcommands, and different options depending on the subcommand:

```
git bisect start [--term-{new,bad}=<term> --term-{old,good}=<term>] [--no-checkout] [--first-parent] [<bad> [<good>...]] [--] [<paths>...]
git bisect (bad|new|<term-new>) [<rev>]
git bisect (good|old|<term-old>) [<rev>...]
git bisect terms [--term-good | --term-bad]
git bisect skip [(<rev>|<range>)...]
git bisect reset [<commit>]
git bisect (visualize|view)
git bisect replay <logfile>
git bisect log
git bisect run <cmd>...
git bisect help
```

This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then `git bisect` picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change.

In fact, `git bisect` can be used to find the commit that changed **any** property of your project; e.g., the commit that fixed a bug, or the commit that caused a benchmark’s performance to improve. To support this more general usage, the terms "old" and "new" can be used in place of "good" and "bad", or you can choose your own terms. See section "Alternate terms" below for more information.

### Basic bisect commands: start, bad, good

As an example, suppose you are trying to find the commit that broke a feature that was known to work in version `v2.6.13-rc2` of your project.
You start a bisect session as follows: ``` $ git bisect start $ git bisect bad # Current version is bad $ git bisect good v2.6.13-rc2 # v2.6.13-rc2 is known to be good ``` Once you have specified at least one bad and one good commit, `git bisect` selects a commit in the middle of that range of history, checks it out, and outputs something similar to the following: ``` Bisecting: 675 revisions left to test after this (roughly 10 steps) ``` You should now compile the checked-out version and test it. If that version works correctly, type ``` $ git bisect good ``` If that version is broken, type ``` $ git bisect bad ``` Then `git bisect` will respond with something like ``` Bisecting: 337 revisions left to test after this (roughly 9 steps) ``` Keep repeating the process: compile the tree, test it, and depending on whether it is good or bad run `git bisect good` or `git bisect bad` to ask for the next commit that needs testing. Eventually there will be no more revisions left to inspect, and the command will print out a description of the first bad commit. The reference `refs/bisect/bad` will be left pointing at that commit. ### Bisect reset After a bisect session, to clean up the bisection state and return to the original HEAD, issue the following command: ``` $ git bisect reset ``` By default, this will return your tree to the commit that was checked out before `git bisect start`. (A new `git bisect start` will also do that, as it cleans up the old bisection state.) With an optional argument, you can return to a different commit instead: ``` $ git bisect reset <commit> ``` For example, `git bisect reset bisect/bad` will check out the first bad revision, while `git bisect reset HEAD` will leave you on the current bisection commit and avoid switching commits at all. ### Alternate terms Sometimes you are not looking for the commit that introduced a breakage, but rather for a commit that caused a change between some other "old" state and "new" state. 
For example, you might be looking for the commit that introduced a particular fix. Or you might be looking for the first commit in which the source-code filenames were finally all converted to your company’s naming standard. Or whatever. In such cases it can be very confusing to use the terms "good" and "bad" to refer to "the state before the change" and "the state after the change".

So instead, you can use the terms "old" and "new", respectively, in place of "good" and "bad". (But note that you cannot mix "good" and "bad" with "old" and "new" in a single session.)

In this more general usage, you provide `git bisect` with a "new" commit that has some property and an "old" commit that doesn’t have that property. Each time `git bisect` checks out a commit, you test if that commit has the property. If it does, mark the commit as "new"; otherwise, mark it as "old". When the bisection is done, `git bisect` will report which commit introduced the property.

To use "old" and "new" instead of "good" and "bad", you must run `git bisect start` without commits as arguments and then run the following commands to add the commits:

```
git bisect old [<rev>]
```

to indicate that a commit was before the sought change, or

```
git bisect new [<rev>...]
```

to indicate that it was after.

To get a reminder of the currently used terms, use

```
git bisect terms
```

You can get just the old (respectively new) term with `git bisect terms --term-old` or `git bisect terms --term-good`.
If you would like to use your own terms instead of "bad"/"good" or "new"/"old", you can choose any names you like (except existing bisect subcommands like `reset`, `start`, …​) by starting the bisection using ``` git bisect start --term-old <term-old> --term-new <term-new> ``` For example, if you are looking for a commit that introduced a performance regression, you might use ``` git bisect start --term-old fast --term-new slow ``` Or if you are looking for the commit that fixed a bug, you might use ``` git bisect start --term-new fixed --term-old broken ``` Then, use `git bisect <term-old>` and `git bisect <term-new>` instead of `git bisect good` and `git bisect bad` to mark commits. ### Bisect visualize/view To see the currently remaining suspects in `gitk`, issue the following command during the bisection process (the subcommand `view` can be used as an alternative to `visualize`): ``` $ git bisect visualize ``` If the `DISPLAY` environment variable is not set, `git log` is used instead. You can also give command-line options such as `-p` and `--stat`. ``` $ git bisect visualize --stat ``` ### Bisect log and bisect replay After having marked revisions as good or bad, issue the following command to show what has been done so far: ``` $ git bisect log ``` If you discover that you made a mistake in specifying the status of a revision, you can save the output of this command to a file, edit it to remove the incorrect entries, and then issue the following commands to return to a corrected state: ``` $ git bisect reset $ git bisect replay that-file ``` ### Avoiding testing a commit If, in the middle of a bisect session, you know that the suggested revision is not a good one to test (e.g. it fails to build and you know that the failure does not have anything to do with the bug you are chasing), you can manually select a nearby commit and test that one instead. For example: ``` $ git bisect good/bad # previous round was good or bad. 
Bisecting: 337 revisions left to test after this (roughly 9 steps) $ git bisect visualize # oops, that is uninteresting. $ git reset --hard HEAD~3 # try 3 revisions before what # was suggested ``` Then compile and test the chosen revision, and afterwards mark the revision as good or bad in the usual manner. ### Bisect skip Instead of choosing a nearby commit by yourself, you can ask Git to do it for you by issuing the command: ``` $ git bisect skip # Current version cannot be tested ``` However, if you skip a commit adjacent to the one you are looking for, Git will be unable to tell exactly which of those commits was the first bad one. You can also skip a range of commits, instead of just one commit, using range notation. For example: ``` $ git bisect skip v2.5..v2.6 ``` This tells the bisect process that no commit after `v2.5`, up to and including `v2.6`, should be tested. Note that if you also want to skip the first commit of the range you would issue the command: ``` $ git bisect skip v2.5 v2.5..v2.6 ``` This tells the bisect process that the commits between `v2.5` and `v2.6` (inclusive) should be skipped. 
### Cutting down bisection by giving more parameters to bisect start You can further cut down the number of trials, if you know what part of the tree is involved in the problem you are tracking down, by specifying path parameters when issuing the `bisect start` command: ``` $ git bisect start -- arch/i386 include/asm-i386 ``` If you know beforehand more than one good commit, you can narrow the bisect space down by specifying all of the good commits immediately after the bad commit when issuing the `bisect start` command: ``` $ git bisect start v2.6.20-rc6 v2.6.20-rc4 v2.6.20-rc1 -- # v2.6.20-rc6 is bad # v2.6.20-rc4 and v2.6.20-rc1 are good ``` ### Bisect run If you have a script that can tell if the current source code is good or bad, you can bisect by issuing the command: ``` $ git bisect run my_script arguments ``` Note that the script (`my_script` in the above example) should exit with code 0 if the current source code is good/old, and exit with a code between 1 and 127 (inclusive), except 125, if the current source code is bad/new. Any other exit code will abort the bisect process. It should be noted that a program that terminates via `exit(-1)` leaves $? = 255, (see the exit(3) manual page), as the value is chopped with `& 0377`. The special exit code 125 should be used when the current source code cannot be tested. If the script exits with this code, the current revision will be skipped (see `git bisect skip` above). 125 was chosen as the highest sensible value to use for this purpose, because 126 and 127 are used by POSIX shells to signal specific error status (127 is for command not found, 126 is for command found but not executable—​these details do not matter, as they are normal errors in the script, as far as `bisect run` is concerned). You may often find that during a bisect session you want to have temporary modifications (e.g. 
s/#define DEBUG 0/#define DEBUG 1/ in a header file, or "revision that does not have this commit needs this patch applied to work around another problem this bisection is not interested in") applied to the revision being tested. To cope with such a situation, after the inner `git bisect` finds the next revision to test, the script can apply the patch before compiling, run the real test, and afterwards decide if the revision (possibly with the needed patch) passed the test and then rewind the tree to the pristine state. Finally the script should exit with the status of the real test to let the `git bisect run` command loop determine the eventual outcome of the bisect session. Options ------- --no-checkout Do not checkout the new working tree at each iteration of the bisection process. Instead just update a special reference named `BISECT_HEAD` to make it point to the commit that should be tested. This option may be useful when the test you would perform in each step does not require a checked out tree. If the repository is bare, `--no-checkout` is assumed. --first-parent Follow only the first parent commit upon seeing a merge commit. In detecting regressions introduced through the merging of a branch, the merge commit will be identified as introduction of the bug and its ancestors will be ignored. This option is particularly useful in avoiding false positives when a merged branch contained broken or non-buildable commits, but the merge itself was OK. 
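The `& 0377` truncation of exit statuses described under "Bisect run" can be observed directly in any POSIX shell; the out-of-range value below is arbitrary:

```shell
# The kernel keeps only the low 8 bits of an exit status, so a script
# that exits with 381 is seen as 381 & 0377 = 125 -- which "git bisect run"
# would interpret as "skip this revision" rather than as a failure.
sh -c 'exit 381'
echo $?    # prints 125
```

This is why a script driven by `git bisect run` should deliberately confine itself to exit codes 0-127, reserving 125 for "cannot test".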
Examples -------- * Automatically bisect a broken build between v1.2 and HEAD: ``` $ git bisect start HEAD v1.2 -- # HEAD is bad, v1.2 is good $ git bisect run make # "make" builds the app $ git bisect reset # quit the bisect session ``` * Automatically bisect a test failure between origin and HEAD: ``` $ git bisect start HEAD origin -- # HEAD is bad, origin is good $ git bisect run make test # "make test" builds and tests $ git bisect reset # quit the bisect session ``` * Automatically bisect a broken test case: ``` $ cat ~/test.sh #!/bin/sh make || exit 125 # this skips broken builds ~/check_test_case.sh # does the test case pass? $ git bisect start HEAD HEAD~10 -- # culprit is among the last 10 $ git bisect run ~/test.sh $ git bisect reset # quit the bisect session ``` Here we use a `test.sh` custom script. In this script, if `make` fails, we skip the current commit. `check_test_case.sh` should `exit 0` if the test case passes, and `exit 1` otherwise. It is safer if both `test.sh` and `check_test_case.sh` are outside the repository to prevent interactions between the bisect, make and test processes and the scripts. * Automatically bisect with temporary modifications (hot-fix): ``` $ cat ~/test.sh #!/bin/sh # tweak the working tree by merging the hot-fix branch # and then attempt a build if git merge --no-commit --no-ff hot-fix && make then # run project specific test and report its status ~/check_test_case.sh status=$? else # tell the caller this is untestable status=125 fi # undo the tweak to allow clean flipping to the next commit git reset --hard # return control exit $status ``` This applies modifications from a hot-fix branch before each test run, e.g. in case your build or test environment changed so that older revisions may need a fix which newer ones have already. 
(Make sure the hot-fix branch is based off a commit which is contained in all revisions which you are bisecting, so that the merge does not pull in too much, or use `git cherry-pick` instead of `git merge`.) * Automatically bisect a broken test case: ``` $ git bisect start HEAD HEAD~10 -- # culprit is among the last 10 $ git bisect run sh -c "make || exit 125; ~/check_test_case.sh" $ git bisect reset # quit the bisect session ``` This shows that you can do without a run script if you write the test on a single line. * Locate a good region of the object graph in a damaged repository ``` $ git bisect start HEAD <known-good-commit> [ <boundary-commit> ... ] --no-checkout $ git bisect run sh -c ' GOOD=$(git for-each-ref "--format=%(objectname)" refs/bisect/good-*) && git rev-list --objects BISECT_HEAD --not $GOOD >tmp.$$ && git pack-objects --stdout >/dev/null <tmp.$$ rc=$? rm -f tmp.$$ test $rc = 0' $ git bisect reset # quit the bisect session ``` In this case, when `git bisect run` finishes, bisect/bad will refer to a commit that has at least one parent whose reachable graph is fully traversable in the sense required by `git pack-objects`. * Look for a fix instead of a regression in the code ``` $ git bisect start $ git bisect new HEAD # current commit is marked as new $ git bisect old HEAD~10 # the tenth commit from now is marked as old ``` or: ``` $ git bisect start --term-old broken --term-new fixed $ git bisect fixed $ git bisect broken HEAD~10 ``` ### Getting help Use `git bisect` to get a short usage description, and `git bisect help` or `git bisect -h` to get a long usage description. See also -------- [Fighting regressions with git bisect](git-bisect-lk2009), [git-blame[1]](git-blame).
git giteveryday giteveryday =========== Name ---- giteveryday - A useful minimum set of commands for Everyday Git Synopsis -------- Everyday Git With 20 Commands Or So Description ----------- Git users can broadly be grouped into four categories for the purposes of describing here a small set of useful commands for everyday Git. * [Individual Developer (Standalone)](#STANDALONE) commands are essential for anybody who makes a commit, even for somebody who works alone. * If you work with other people, you will need commands listed in the [Individual Developer (Participant)](#PARTICIPANT) section as well. * People who play the [Integrator](#INTEGRATOR) role need to learn some more commands in addition to the above. * [Repository Administration](#ADMINISTRATION) commands are for system administrators who are responsible for the care and feeding of Git repositories. Individual developer (standalone) --------------------------------- A standalone individual developer does not exchange patches with other people, and works alone in a single repository, using the following commands. * [git-init[1]](git-init) to create a new repository. * [git-log[1]](git-log) to see what happened. * [git-switch[1]](git-switch) and [git-branch[1]](git-branch) to switch branches. * [git-add[1]](git-add) to manage the index file. * [git-diff[1]](git-diff) and [git-status[1]](git-status) to see what you are in the middle of doing. * [git-commit[1]](git-commit) to advance the current branch. * [git-restore[1]](git-restore) to undo changes. * [git-merge[1]](git-merge) to merge between local branches. * [git-rebase[1]](git-rebase) to maintain topic branches. * [git-tag[1]](git-tag) to mark a known point. ### Examples Use a tarball as a starting point for a new repository. ``` $ tar zxf frotz.tar.gz $ cd frotz $ git init $ git add . (1) $ git commit -m "import of frotz source tree." $ git tag v2.43 (2) ``` 1. add everything under the current directory. 2. make a lightweight, unannotated tag.
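Step (2) above records a lightweight tag, which is nothing more than a name for a commit. For tags you intend to publish, an annotated tag is usually preferable because it stores the tagger, date and a message (and can be GPG-signed with `-s`). A self-contained sketch in a throwaway repository; the inline identity settings exist only so the commands succeed in a clean environment:

```shell
# Hypothetical throwaway repository to illustrate annotated tags.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "import of frotz source tree."
git -c user.name=me -c user.email=me@example.com \
    tag -a v2.43 -m "frotz 2.43"
git cat-file -t v2.43    # prints "tag"; a lightweight tag would print "commit"
```

The annotated tag is a real `tag` object in the object database, which is what makes it signable and describable.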
Create a topic branch and develop. ``` $ git switch -c alsa-audio (1) $ edit/compile/test $ git restore curses/ux_audio_oss.c (2) $ git add curses/ux_audio_alsa.c (3) $ edit/compile/test $ git diff HEAD (4) $ git commit -a -s (5) $ edit/compile/test $ git diff HEAD^ (6) $ git commit -a --amend (7) $ git switch master (8) $ git merge alsa-audio (9) $ git log --since='3 days ago' (10) $ git log v2.43.. curses/ (11) ``` 1. create a new topic branch. 2. revert your botched changes in `curses/ux_audio_oss.c`. 3. you need to tell Git if you added a new file; removal and modification will be caught if you do `git commit -a` later. 4. to see what changes you are committing. 5. commit everything, as you have tested, with your sign-off. 6. look at all your changes including the previous commit. 7. amend the previous commit, adding all your new changes, using your original message. 8. switch to the master branch. 9. merge a topic branch into your master branch. 10. review commit logs; other forms to limit output can be combined and include `-10` (to show up to 10 commits), `--until=2005-12-10`, etc. 11. view only the changes that touch what’s in `curses/` directory, since `v2.43` tag. Individual developer (participant) ---------------------------------- A developer working as a participant in a group project needs to learn how to communicate with others, and uses these commands in addition to the ones needed by a standalone developer. * [git-clone[1]](git-clone) from the upstream to prime your local repository. * [git-pull[1]](git-pull) and [git-fetch[1]](git-fetch) from "origin" to keep up-to-date with the upstream. * [git-push[1]](git-push) to shared repository, if you adopt CVS style shared repository workflow. * [git-format-patch[1]](git-format-patch) to prepare e-mail submission, if you adopt Linux kernel-style public forum workflow. * [git-send-email[1]](git-send-email) to send your e-mail submission without corruption by your MUA. 
* [git-request-pull[1]](git-request-pull) to create a summary of changes for your upstream to pull. ### Examples Clone the upstream and work on it. Feed changes to upstream. ``` $ git clone git://git.kernel.org/pub/scm/.../torvalds/linux-2.6 my2.6 $ cd my2.6 $ git switch -c mine master (1) $ edit/compile/test; git commit -a -s (2) $ git format-patch master (3) $ git send-email --to="person <[email protected]>" 00*.patch (4) $ git switch master (5) $ git pull (6) $ git log -p ORIG_HEAD.. arch/i386 include/asm-i386 (7) $ git ls-remote --heads http://git.kernel.org/.../jgarzik/libata-dev.git (8) $ git pull git://git.kernel.org/pub/.../jgarzik/libata-dev.git ALL (9) $ git reset --hard ORIG_HEAD (10) $ git gc (11) ``` 1. checkout a new branch `mine` from master. 2. repeat as needed. 3. extract patches from your branch, relative to master, 4. and email them. 5. return to `master`, ready to see what’s new 6. `git pull` fetches from `origin` by default and merges into the current branch. 7. immediately after pulling, look at the changes done upstream since last time we checked, only in the area we are interested in. 8. check the branch names in an external repository (if not known). 9. fetch from a specific branch `ALL` from a specific repository and merge it. 10. revert the pull. 11. garbage collect leftover objects from reverted pull. Push into another repository. ``` satellite$ git clone mothership:frotz frotz (1) satellite$ cd frotz satellite$ git config --get-regexp '^(remote|branch)\.' (2) remote.origin.url mothership:frotz remote.origin.fetch refs/heads/*:refs/remotes/origin/* branch.master.remote origin branch.master.merge refs/heads/master satellite$ git config remote.origin.push \ +refs/heads/*:refs/remotes/satellite/* (3) satellite$ edit/compile/test/commit satellite$ git push origin (4) mothership$ cd frotz mothership$ git switch master mothership$ git merge satellite/master (5) ``` 1. 
mothership machine has a frotz repository under your home directory; clone from it to start a repository on the satellite machine. 2. clone sets these configuration variables by default. It arranges `git pull` to fetch and store the branches of mothership machine to local `remotes/origin/*` remote-tracking branches. 3. arrange `git push` to push all local branches to their corresponding branch of the mothership machine. 4. push will stash all our work away on `remotes/satellite/*` remote-tracking branches on the mothership machine. You could use this as a back-up method. Likewise, you can pretend that mothership "fetched" from you (useful when access is one sided). 5. on mothership machine, merge the work done on the satellite machine into the master branch. Branch off of a specific tag. ``` $ git switch -c private2.6.14 v2.6.14 (1) $ edit/compile/test; git commit -a $ git checkout master $ git cherry-pick v2.6.14..private2.6.14 (2) ``` 1. create a private branch based on a well known (but somewhat behind) tag. 2. forward port all changes in `private2.6.14` branch to `master` branch without a formal "merging". Or longhand `git format-patch -k -m --stdout v2.6.14..private2.6.14 | git am -3 -k` An alternate participant submission mechanism is using the `git request-pull` or pull-request mechanisms (e.g. as used on GitHub (www.github.com)) to notify your upstream of your contribution. Integrator ---------- A fairly central person acting as the integrator in a group project receives changes made by others, reviews and integrates them and publishes the result for others to use, using these commands in addition to the ones needed by participants. This section can also be used by those who respond to `git request-pull` or pull-request on GitHub (www.github.com) to integrate the work of others into their history. A sub-area lieutenant for a repository will act both as a participant and as an integrator.
* [git-am[1]](git-am) to apply patches e-mailed in from your contributors. * [git-pull[1]](git-pull) to merge from your trusted lieutenants. * [git-format-patch[1]](git-format-patch) to prepare and send suggested alternative to contributors. * [git-revert[1]](git-revert) to undo botched commits. * [git-push[1]](git-push) to publish the bleeding edge. ### Examples A typical integrator’s Git day. ``` $ git status (1) $ git branch --no-merged master (2) $ mailx (3) & s 2 3 4 5 ./+to-apply & s 7 8 ./+hold-linus & q $ git switch -c topic/one master $ git am -3 -i -s ./+to-apply (4) $ compile/test $ git switch -c hold/linus && git am -3 -i -s ./+hold-linus (5) $ git switch topic/one && git rebase master (6) $ git switch -C seen next (7) $ git merge topic/one topic/two && git merge hold/linus (8) $ git switch maint $ git cherry-pick master~4 (9) $ compile/test $ git tag -s -m "GIT 0.99.9x" v0.99.9x (10) $ git fetch ko && for branch in master maint next seen (11) do git show-branch ko/$branch $branch (12) done $ git push --follow-tags ko (13) ``` 1. see what you were in the middle of doing, if anything. 2. see which branches haven’t been merged into `master` yet. Likewise for any other integration branches e.g. `maint`, `next` and `seen`. 3. read mails, save ones that are applicable, and save others that are not quite ready (other mail readers are available). 4. apply them, interactively, with your sign-offs. 5. create topic branch as needed and apply, again with sign-offs. 6. rebase internal topic branch that has not been merged to the master or exposed as a part of a stable branch. 7. restart `seen` every time from the next. 8. and bundle topic branches still cooking. 9. backport a critical fix. 10. create a signed tag. 11. make sure master was not accidentally rewound beyond that already pushed out. 12. In the output from `git show-branch`, `master` should have everything `ko/master` has, and `next` should have everything `ko/next` has, etc. 13. 
push out the bleeding edge, together with new tags that point into the pushed history. In this example, the `ko` shorthand points at the Git maintainer’s repository at kernel.org, and looks like this: ``` (in .git/config) [remote "ko"] url = kernel.org:/pub/scm/git/git.git fetch = refs/heads/*:refs/remotes/ko/* push = refs/heads/master push = refs/heads/next push = +refs/heads/seen push = refs/heads/maint ``` Repository administration ------------------------- A repository administrator uses the following tools to set up and maintain access to the repository by developers. * [git-daemon[1]](git-daemon) to allow anonymous download from repository. * [git-shell[1]](git-shell) can be used as a `restricted login shell` for shared central repository users. * [git-http-backend[1]](git-http-backend) provides a server side implementation of Git-over-HTTP ("Smart http") allowing both fetch and push services. * [gitweb[1]](gitweb) provides a web front-end to Git repositories, which can be set-up using the [git-instaweb[1]](git-instaweb) script. [update hook howto](https://git-scm.com/docs/howto/update-hook-example) has a good example of managing a shared central repository. In addition there are a number of other widely deployed hosting, browsing and reviewing solutions such as: * gitolite, gerrit code review, cgit and others. ### Examples We assume the following in /etc/services ``` $ grep 9418 /etc/services git 9418/tcp # Git Version Control System ``` Run git-daemon to serve /pub/scm from inetd. ``` $ grep git /etc/inetd.conf git stream tcp nowait nobody \ /usr/bin/git-daemon git-daemon --inetd --export-all /pub/scm ``` The actual configuration line should be on one line. Run git-daemon to serve /pub/scm from xinetd. 
``` $ cat /etc/xinetd.d/git-daemon # default: off # description: The Git server offers access to Git repositories service git { disable = no type = UNLISTED port = 9418 socket_type = stream wait = no user = nobody server = /usr/bin/git-daemon server_args = --inetd --export-all --base-path=/pub/scm log_on_failure += USERID } ``` Check your xinetd(8) documentation and setup, this is from a Fedora system. Others might be different. Give push/pull only access to developers using git-over-ssh. e.g. those using: `$ git push/pull ssh://host.xz/pub/scm/project` ``` $ grep git /etc/passwd (1) alice:x:1000:1000::/home/alice:/usr/bin/git-shell bob:x:1001:1001::/home/bob:/usr/bin/git-shell cindy:x:1002:1002::/home/cindy:/usr/bin/git-shell david:x:1003:1003::/home/david:/usr/bin/git-shell $ grep git /etc/shells (2) /usr/bin/git-shell ``` 1. log-in shell is set to /usr/bin/git-shell, which does not allow anything but `git push` and `git pull`. The users require ssh access to the machine. 2. in many distributions /etc/shells needs to list what is used as the login shell. CVS-style shared repository. ``` $ grep git /etc/group (1) git:x:9418:alice,bob,cindy,david $ cd /home/devo.git $ ls -l (2) lrwxrwxrwx 1 david git 17 Dec 4 22:40 HEAD -> refs/heads/master drwxrwsr-x 2 david git 4096 Dec 4 22:40 branches -rw-rw-r-- 1 david git 84 Dec 4 22:40 config -rw-rw-r-- 1 david git 58 Dec 4 22:40 description drwxrwsr-x 2 david git 4096 Dec 4 22:40 hooks -rw-rw-r-- 1 david git 37504 Dec 4 22:40 index drwxrwsr-x 2 david git 4096 Dec 4 22:40 info drwxrwsr-x 4 david git 4096 Dec 4 22:40 objects drwxrwsr-x 4 david git 4096 Nov 7 14:58 refs drwxrwsr-x 2 david git 4096 Dec 4 22:40 remotes $ ls -l hooks/update (3) -r-xr-xr-x 1 david git 3536 Dec 4 22:40 update $ cat info/allowed-users (4) refs/heads/master alice\|cindy refs/heads/doc-update bob refs/tags/v[0-9]* david ``` 1. place the developers into the same git group. 2. and make the shared repository writable by the group. 3. 
use update-hook example by Carl from Documentation/howto/ for branch policy control. 4. alice and cindy can push into master, only bob can push into doc-update. david is the release manager and is the only person who can create and push version tags.
git gitformat-signature gitformat-signature =================== Name ---- gitformat-signature - Git cryptographic signature formats Synopsis -------- ``` <[tag|commit] object header(s)> <over-the-wire protocol> ``` Description ----------- Git uses cryptographic signatures in various places, currently objects (tags, commits, mergetags) and transactions (pushes). In every case, the command which is about to create an object or transaction determines a payload from that, calls gpg to obtain a detached signature for the payload (`gpg -bsa`) and embeds the signature into the object or transaction. Signatures always begin with `-----BEGIN PGP SIGNATURE-----` and end with `-----END PGP SIGNATURE-----`, unless gpg is told to produce RFC1991 signatures which use `MESSAGE` instead of `SIGNATURE`. Signatures sometimes appear as a part of the normal payload (e.g. a signed tag has the signature block appended after the payload that the signature applies to), and sometimes appear in the value of an object header (e.g. a merge commit that merged a signed tag would have the entire tag contents on its "mergetag" header). In the case of the latter, the usual multi-line formatting rule for object headers applies. I.e. the second and subsequent lines are prefixed with a SP to signal that the line is continued from the previous line. This is even true for an originally empty line. In the following examples, a line that ends with a whitespace character is highlighted with a `$` sign; if you are trying to recreate these examples by hand, do not cut and paste them---they are there primarily to highlight extra whitespace at the end of some lines. The signed payload and the way the signature is embedded depend on the type of the object or transaction.
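The continuation rule described above can be undone mechanically: drop the header name from the first line and the single leading SP from every following line (an "empty" continuation line therefore consists of one SP). A minimal sketch over a shortened, hypothetical `gpgsig` value; the `printf` stands in for `git cat-file commit <rev>` output restricted to that header:

```shell
# Unfold a multi-line object header value: strip the header name from
# the first line and the one-SP continuation prefix from the rest.
printf 'gpgsig -----BEGIN PGP SIGNATURE-----\n \n iQEcBA...\n -----END PGP SIGNATURE-----\n' |
sed -e 's/^gpgsig //' -e 's/^ //'
```

The output is the signature block exactly as it appeared before it was folded into the header.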
Tag signatures -------------- * created by: `git tag -s` * payload: annotated tag object * embedding: append the signature to the unsigned tag object * example: tag `signedtag` with subject `signed tag` ``` object 04b871796dc0420f8e7561a895b52484b701d51a type commit tag signedtag tagger C O Mitter <[email protected]> 1465981006 +0000 signed tag signed tag message body -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAABAgAGBQJXYRhOAAoJEGEJLoW3InGJklkIAIcnhL7RwEb/+QeX9enkXhxn rxfdqrvWd1K80sl2TOt8Bg/NYwrUBw/RWJ+sg/hhHp4WtvE1HDGHlkEz3y11Lkuh 8tSxS3qKTxXUGozyPGuE90sJfExhZlW4knIQ1wt/yWqM+33E9pN4hzPqLwyrdods q8FWEqPPUbSJXoMbRPw04S5jrLtZSsUWbRYjmJCHzlhSfFWW4eFd37uquIaLUBS0 rkC3Jrx7420jkIpgFcTI2s60uhSQLzgcCwdA2ukSYIRnjg/zDkj8+3h/GaROJ72x lZyI6HWixKJkWw8lE9aAOD9TmTW9sFJwcVAzmAuFX2kUreDUKMZduGcoRYGpD7E= =jpXa -----END PGP SIGNATURE----- ``` * verify with: `git verify-tag [-v]` or `git tag -v` ``` gpg: Signature made Wed Jun 15 10:56:46 2016 CEST using RSA key ID B7227189 gpg: Good signature from "Eris Discordia <[email protected]>" gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189 object 04b871796dc0420f8e7561a895b52484b701d51a type commit tag signedtag tagger C O Mitter <[email protected]> 1465981006 +0000 signed tag signed tag message body ``` Commit signatures ----------------- * created by: `git commit -S` * payload: commit object * embedding: header entry `gpgsig` (content is preceded by a space) * example: commit with subject `signed commit` ``` tree eebfed94e75e7760540d1485c740902590a00332 parent 04b871796dc0420f8e7561a895b52484b701d51a author A U Thor <[email protected]> 1465981137 +0000 committer C O Mitter <[email protected]> 1465981137 +0000 gpgsig -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 $ iQEcBAABAgAGBQJXYRjRAAoJEGEJLoW3InGJ3IwIAIY4SA6GxY3BjL60YyvsJPh/ HRCJwH+w7wt3Yc/9/bW2F+gF72kdHOOs2jfv+OZhq0q4OAN6fvVSczISY/82LpS7 DVdMQj2/YcHDT4xrDNBnXnviDO9G7am/9OE77kEbXrp7QPxvhjkicHNwy2rEflAA zn075rtEERDHr8nRYiDh8eVrefSO7D+bdQ7gv+7GsYMsd2auJWi1dHOSfTr9HIF4 HJhWXT9d2f8W+diRYXGh4X0wYiGg6na/soXc+vdtDYBzIxanRqjg8jCAeo1eOTk1 EdTwhcTZlI0x5pvJ3H0+4hA2jtldVtmPM4OTB0cTrEWBad7XV6YgiyuII73Ve3I= =jKHM -----END PGP SIGNATURE----- signed commit signed commit message body ``` * verify with: `git verify-commit [-v]` (or `git show --show-signature`) ``` gpg: Signature made Wed Jun 15 10:58:57 2016 CEST using RSA key ID B7227189 gpg: Good signature from "Eris Discordia <[email protected]>" gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189 tree eebfed94e75e7760540d1485c740902590a00332 parent 04b871796dc0420f8e7561a895b52484b701d51a author A U Thor <[email protected]> 1465981137 +0000 committer C O Mitter <[email protected]> 1465981137 +0000 signed commit signed commit message body ``` Mergetag signatures ------------------- * created by: `git merge` on signed tag * payload/embedding: the whole signed tag object is embedded into the (merge) commit object as header entry `mergetag` * example: merge of the signed tag `signedtag` as above ``` tree c7b1cff039a93f3600a1d18b82d26688668c7dea parent c33429be94b5f2d3ee9b0adad223f877f174b05d parent 04b871796dc0420f8e7561a895b52484b701d51a author A U Thor <[email protected]> 1465982009 +0000 committer C O Mitter <[email protected]> 1465982009 +0000 mergetag object 04b871796dc0420f8e7561a895b52484b701d51a type commit tag signedtag tagger C O Mitter <[email protected]> 1465981006 +0000 $ signed tag $ signed tag message body -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 $ iQEcBAABAgAGBQJXYRhOAAoJEGEJLoW3InGJklkIAIcnhL7RwEb/+QeX9enkXhxn rxfdqrvWd1K80sl2TOt8Bg/NYwrUBw/RWJ+sg/hhHp4WtvE1HDGHlkEz3y11Lkuh 8tSxS3qKTxXUGozyPGuE90sJfExhZlW4knIQ1wt/yWqM+33E9pN4hzPqLwyrdods q8FWEqPPUbSJXoMbRPw04S5jrLtZSsUWbRYjmJCHzlhSfFWW4eFd37uquIaLUBS0 rkC3Jrx7420jkIpgFcTI2s60uhSQLzgcCwdA2ukSYIRnjg/zDkj8+3h/GaROJ72x lZyI6HWixKJkWw8lE9aAOD9TmTW9sFJwcVAzmAuFX2kUreDUKMZduGcoRYGpD7E= =jpXa -----END PGP SIGNATURE----- Merge tag 'signedtag' into downstream signed tag signed tag message body # gpg: Signature made Wed Jun 15 08:56:46 2016 UTC using RSA key ID B7227189 # gpg: Good signature from "Eris Discordia <[email protected]>" # gpg: WARNING: This key is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. 
# Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189 ``` * verify with: verification is embedded in merge commit message by default, alternatively with `git show --show-signature`: ``` commit 9863f0c76ff78712b6800e199a46aa56afbcbd49 merged tag 'signedtag' gpg: Signature made Wed Jun 15 10:56:46 2016 CEST using RSA key ID B7227189 gpg: Good signature from "Eris Discordia <[email protected]>" gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189 Merge: c33429b 04b8717 Author: A U Thor <[email protected]> Date: Wed Jun 15 09:13:29 2016 +0000 Merge tag 'signedtag' into downstream signed tag signed tag message body # gpg: Signature made Wed Jun 15 08:56:46 2016 UTC using RSA key ID B7227189 # gpg: Good signature from "Eris Discordia <[email protected]>" # gpg: WARNING: This key is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. # Primary key fingerprint: D4BE 2231 1AD3 131E 5EDA 29A4 6109 2E85 B722 7189 ``` git git-fetch-pack git-fetch-pack ============== Name ---- git-fetch-pack - Receive missing objects from another repository Synopsis -------- ``` git fetch-pack [--all] [--quiet|-q] [--keep|-k] [--thin] [--include-tag] [--upload-pack=<git-upload-pack>] [--depth=<n>] [--no-progress] [-v] <repository> [<refs>…​] ``` Description ----------- Usually you would want to use `git fetch`, which is a higher level wrapper of this command, instead. Invokes `git-upload-pack` on a possibly remote repository and asks it to send objects missing from this repository, to update the named heads. The list of commits available locally is found out by scanning the local refs/ hierarchy and sent to `git-upload-pack` running on the other end. 
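Refs in this exchange travel in Git's pkt-line framing (the same framing `--stdin` requires together with `--stateless-rpc`): each packet is prefixed with its own total length as four hex digits, and `0000` is the terminating flush packet. A minimal encoder sketch; the ref names are hypothetical:

```shell
# Emit one pkt-line: the 4-hex-digit length counts itself, the payload
# and the trailing LF.
pkt() { printf '%04x%s\n' $(( ${#1} + 5 )) "$1"; }

pkt 'refs/heads/master'    # -> 0016refs/heads/master
pkt 'refs/heads/next'
printf '0000'              # flush packet ends the list
```

For `refs/heads/master` the payload is 17 bytes, so the length is 17 + 4 (length field) + 1 (LF) = 22 = 0x16.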
This command degenerates to download everything to complete the asked refs from the remote side when the local side does not have a common ancestor commit. Options ------- --all Fetch all remote refs. --stdin Take the list of refs from stdin, one per line. If there are refs specified on the command line in addition to this option, then the refs from stdin are processed after those on the command line. If `--stateless-rpc` is specified together with this option then the list of refs must be in packet format (pkt-line). Each ref must be in a separate packet, and the list must end with a flush packet. -q --quiet Pass `-q` flag to `git unpack-objects`; this makes the cloning process less verbose. -k --keep Do not invoke `git unpack-objects` on received data, but create a single packfile out of it instead, and store it in the object database. If provided twice then the pack is locked against repacking. --thin Fetch a "thin" pack, which records objects in deltified form based on objects not included in the pack to reduce network traffic. --include-tag If the remote side supports it, annotated tag objects will be downloaded on the same connection as the other objects if the object the tag references is downloaded. The caller must otherwise determine the tags this option made available. --upload-pack=<git-upload-pack> Use this to specify the path to `git-upload-pack` on the remote side, if it is not found on your $PATH. Installations of sshd ignore the user’s environment setup scripts for login shells (e.g. .bash\_profile) and your privately installed git may not be found on the system default $PATH. Another workaround suggested is to set up your $PATH in ".bashrc", but this flag is for people who do not want to pay the overhead for non-interactive shells by having a lean .bashrc file (they set most of the things up in .bash\_profile). --exec=<git-upload-pack> Same as --upload-pack=<git-upload-pack>. --depth=<n> Limit fetching to ancestor-chains not longer than n.
`git-upload-pack` treats the special depth 2147483647 as infinite even if there is an ancestor-chain that long. --shallow-since=<date> Deepen or shorten the history of a shallow repository to include all reachable commits after <date>. --shallow-exclude=<revision> Deepen or shorten the history of a shallow repository to exclude commits reachable from a specified remote branch or tag. This option can be specified multiple times. --deepen-relative Argument --depth specifies the number of commits from the current shallow boundary instead of from the tip of each remote branch history. --refetch Skips negotiating commits with the server in order to fetch all matching objects. Use to reapply a new partial clone blob/tree filter. --no-progress Do not show the progress. --check-self-contained-and-connected Output "connectivity-ok" if the received pack is self-contained and connected. -v Run verbosely. <repository> The URL to the remote repository. <refs>… The remote heads to update from. This is relative to $GIT\_DIR (e.g. "HEAD", "refs/heads/master"). When unspecified, update from all heads the remote side has. If the remote has enabled the options `uploadpack.allowTipSHA1InWant`, `uploadpack.allowReachableSHA1InWant`, or `uploadpack.allowAnySHA1InWant`, they may alternatively be 40-hex sha1s present on the remote. See also -------- [git-fetch[1]](git-fetch) git git-fsmonitor--daemon git-fsmonitor--daemon ===================== Name ---- git-fsmonitor--daemon - A Built-in Filesystem Monitor Synopsis -------- ``` git fsmonitor--daemon start git fsmonitor--daemon run git fsmonitor--daemon stop git fsmonitor--daemon status ``` Description ----------- A daemon to watch the working directory for file and directory changes using platform-specific filesystem notification facilities. This daemon communicates directly with commands like `git status` using the [simple IPC](api-simple-ipc) interface instead of the slower [githooks[5]](githooks) interface.
This daemon is built into Git so that no third-party tools are required. Options ------- start Starts a daemon in the background. run Runs a daemon in the foreground. stop Stops the daemon running in the current working directory, if present. status Exits with zero status if a daemon is watching the current working directory. Remarks ------- This daemon is a long running process used to watch a single working directory and maintain a list of the recently changed files and directories. Performance of commands such as `git status` can be increased if they just ask for a summary of changes to the working directory and can avoid scanning the disk. When `core.fsmonitor` is set to `true` (see [git-config[1]](git-config)) commands, such as `git status`, will ask the daemon for changes and automatically start it (if necessary). For more information see the "File System Monitor" section in [git-update-index[1]](git-update-index). Caveats ------- The fsmonitor daemon does not currently know about submodules and does not know to filter out filesystem events that happen within a submodule. If fsmonitor daemon is watching a super repo and a file is modified within the working directory of a submodule, it will report the change (as happening against the super repo). However, the client will properly ignore these extra events, so performance may be affected but it will not cause an incorrect result. By default, the fsmonitor daemon refuses to work against network-mounted repositories; this may be overridden by setting `fsmonitor.allowRemote` to `true`. Note, however, that the fsmonitor daemon is not guaranteed to work correctly with all network-mounted repositories and such use is considered experimental. On Mac OS, the inter-process communication (IPC) between various Git commands and the fsmonitor daemon is done via a Unix domain socket (UDS) — a special type of file — which is supported by native Mac OS filesystems, but not on network-mounted filesystems, NTFS, or FAT32. 
Other filesystems may or may not have the needed support; the fsmonitor daemon is not guaranteed to work with these filesystems and such use is considered experimental. By default, the socket is created in the `.git` directory; however, if the `.git` directory is on a network-mounted filesystem, it will instead be created at `$HOME/.git-fsmonitor-*` unless `$HOME` itself is on a network-mounted filesystem, in which case you must set the configuration variable `fsmonitor.socketDir` to the path of a directory on a Mac OS native filesystem in which to create the socket file. If none of the above directories (`.git`, `$HOME`, or `fsmonitor.socketDir`) is on a native Mac OS filesystem, the fsmonitor daemon will report an error that will cause the daemon and the currently running command to exit. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: fsmonitor.allowRemote By default, the fsmonitor daemon refuses to work against network-mounted repositories. Setting `fsmonitor.allowRemote` to `true` overrides this behavior. Only respected when `core.fsmonitor` is set to `true`. fsmonitor.socketDir This Mac OS-specific option, if set, specifies the directory in which to create the Unix domain socket used for communication between the fsmonitor daemon and various Git commands. The directory must reside on a native Mac OS filesystem. Only respected when `core.fsmonitor` is set to `true`. git git-ls-remote git-ls-remote ============= Name ---- git-ls-remote - List references in a remote repository Synopsis -------- ``` git ls-remote [--heads] [--tags] [--refs] [--upload-pack=<exec>] [-q | --quiet] [--exit-code] [--get-url] [--sort=<key>] [--symref] [<repository> [<refs>…​]] ``` Description ----------- Displays references available in a remote repository along with the associated commit IDs. 
Options ------- -h --heads -t --tags Limit to only refs/heads and refs/tags, respectively. These options are `not` mutually exclusive; when both are given, references stored in refs/heads and refs/tags are displayed. Note that `git ls-remote -h` used without anything else on the command line gives help, consistent with other git subcommands. --refs Do not show peeled tags or pseudorefs like `HEAD` in the output. -q --quiet Do not print the remote URL to stderr. --upload-pack=<exec> Specify the full path of `git-upload-pack` on the remote host. This allows listing references from repositories accessed via SSH where the SSH daemon does not use the PATH configured by the user. --exit-code Exit with status "2" when no matching refs are found in the remote repository. Usually the command exits with status "0" to indicate it successfully talked with the remote repository, whether or not it found any matching refs. --get-url Expand the URL of the given remote repository taking into account any "url.<base>.insteadOf" config setting (see [git-config[1]](git-config)) and exit without talking to the remote. --symref In addition to the object pointed to by it, show the underlying ref it points to when showing a symbolic ref. Currently, upload-pack only shows the symref HEAD, so it will be the only one shown by ls-remote. --sort=<key> Sort based on the given key. Prefix `-` to sort in descending order of the value. Supports "version:refname" or "v:refname" (tag names are treated as versions). The "version:refname" sort order can also be affected by the "versionsort.suffix" configuration variable. See [git-for-each-ref[1]](git-for-each-ref) for more sort options, but be aware that keys like `committerdate` that require access to the objects themselves will not work for refs whose objects have not yet been fetched from the remote, and will give a `missing object` error. -o <option> --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. 
The given string must not contain a NUL or LF character. When multiple `--server-option=<option>` are given, they are all sent to the other side in the order listed on the command line. <repository> The "remote" repository to query. This parameter can be either a URL or the name of a remote (see the GIT URLS and REMOTES sections of [git-fetch[1]](git-fetch)). <refs>…​ When unspecified, all references, after filtering done with --heads and --tags, are shown. When <refs>…​ are specified, only references matching the given patterns are displayed. Examples -------- ``` $ git ls-remote --tags ./. d6602ec5194c87b0fc87103ca4d67251c76f233a refs/tags/v0.99 f25a265a342aed6041ab0cc484224d9ca54b6f41 refs/tags/v0.99.1 7ceca275d047c90c0c7d5afb13ab97efdf51bd6e refs/tags/v0.99.3 c5db5456ae3b0873fc659c19fafdde22313cc441 refs/tags/v0.99.2 0918385dbd9656cab0d1d81ba7453d49bbc16250 refs/tags/junio-gpg-pub $ git ls-remote http://www.kernel.org/pub/scm/git/git.git master seen rc 5fe978a5381f1fbad26a80e682ddd2a401966740 refs/heads/master c781a84b5204fb294c9ccc79f8b3baceeb32c061 refs/heads/seen $ git remote add korg http://www.kernel.org/pub/scm/git/git.git $ git ls-remote --tags korg v\* d6602ec5194c87b0fc87103ca4d67251c76f233a refs/tags/v0.99 f25a265a342aed6041ab0cc484224d9ca54b6f41 refs/tags/v0.99.1 c5db5456ae3b0873fc659c19fafdde22313cc441 refs/tags/v0.99.2 7ceca275d047c90c0c7d5afb13ab97efdf51bd6e refs/tags/v0.99.3 ``` See also -------- [git-check-ref-format[1]](git-check-ref-format).
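Because `--exit-code` distinguishes "ref found" (status 0) from "no matching refs" (status 2), `git ls-remote` works well as a test inside scripts. A minimal self-contained sketch, using a throwaway local repository as the "remote" (the paths, identity, and tag name are invented for illustration):

```shell
# Build a throwaway repository to act as the remote (names are invented).
tmp=$(mktemp -d)
git init -q "$tmp/remote"
git -C "$tmp/remote" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'
git -C "$tmp/remote" tag v1.0

# Status 0 when a matching ref exists...
if git ls-remote --exit-code "$tmp/remote" refs/tags/v1.0 >/dev/null; then
    echo "tag v1.0 exists"
fi

# ...and status 2 when it does not.
git ls-remote --exit-code "$tmp/remote" refs/tags/v9.9 >/dev/null 2>&1 \
    || echo "no matching ref (status $?)"
```

Without `--exit-code`, both invocations would exit 0 as long as the remote could be contacted, which is why the flag matters for scripting.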
git git-grep git-grep ======== Name ---- git-grep - Print lines matching a pattern Synopsis -------- ``` git grep [-a | --text] [-I] [--textconv] [-i | --ignore-case] [-w | --word-regexp] [-v | --invert-match] [-h|-H] [--full-name] [-E | --extended-regexp] [-G | --basic-regexp] [-P | --perl-regexp] [-F | --fixed-strings] [-n | --line-number] [--column] [-l | --files-with-matches] [-L | --files-without-match] [(-O | --open-files-in-pager) [<pager>]] [-z | --null] [ -o | --only-matching ] [-c | --count] [--all-match] [-q | --quiet] [--max-depth <depth>] [--[no-]recursive] [--color[=<when>] | --no-color] [--break] [--heading] [-p | --show-function] [-A <post-context>] [-B <pre-context>] [-C <context>] [-W | --function-context] [(-m | --max-count) <num>] [--threads <num>] [-f <file>] [-e] <pattern> [--and|--or|--not|(|)|-e <pattern>…​] [--recurse-submodules] [--parent-basename <basename>] [ [--[no-]exclude-standard] [--cached | --no-index | --untracked] | <tree>…​] [--] [<pathspec>…​] ``` Description ----------- Look for specified patterns in the tracked files in the work tree, blobs registered in the index file, or blobs in given tree objects. Patterns are lists of one or more search expressions separated by newline characters. An empty string as search expression matches all lines. Options ------- --cached Instead of searching tracked files in the working tree, search blobs registered in the index file. --no-index Search files in the current directory that is not managed by Git. --untracked In addition to searching in the tracked files in the working tree, search also in untracked files. --no-exclude-standard Also search in ignored files by not honoring the `.gitignore` mechanism. Only useful with `--untracked`. --exclude-standard Do not pay attention to ignored files specified via the `.gitignore` mechanism. Only useful when searching files in the current directory with `--no-index`. 
--recurse-submodules Recursively search in each submodule that is active and checked out in the repository. When used in combination with the <tree> option the prefix of all submodule output will be the name of the parent project’s <tree> object. This option has no effect if `--no-index` is given. -a --text Process binary files as if they were text. --textconv Honor textconv filter settings. --no-textconv Do not honor textconv filter settings. This is the default. -i --ignore-case Ignore case differences between the patterns and the files. -I Don’t match the pattern in binary files. --max-depth <depth> For each <pathspec> given on command line, descend at most <depth> levels of directories. A value of -1 means no limit. This option is ignored if <pathspec> contains active wildcards. In other words if "a\*" matches a directory named "a\*", "\*" is matched literally so --max-depth is still effective. -r --recursive Same as `--max-depth=-1`; this is the default. --no-recursive Same as `--max-depth=0`. -w --word-regexp Match the pattern only at word boundary (either begin at the beginning of a line, or preceded by a non-word character; end at the end of a line or followed by a non-word character). -v --invert-match Select non-matching lines. -h -H By default, the command shows the filename for each match. `-h` option is used to suppress this output. `-H` is there for completeness and does not do anything except it overrides `-h` given earlier on the command line. --full-name When run from a subdirectory, the command usually outputs paths relative to the current directory. This option forces paths to be output relative to the project top directory. -E --extended-regexp -G --basic-regexp Use POSIX extended/basic regexp for patterns. Default is to use basic regexp. -P --perl-regexp Use Perl-compatible regular expressions for patterns. Support for these types of regular expressions is an optional compile-time dependency. 
If Git wasn’t compiled with support for them providing this option will cause it to die. -F --fixed-strings Use fixed strings for patterns (don’t interpret pattern as a regex). -n --line-number Prefix the line number to matching lines. --column Prefix the 1-indexed byte-offset of the first match from the start of the matching line. -l --files-with-matches --name-only -L --files-without-match Instead of showing every matched line, show only the names of files that contain (or do not contain) matches. For better compatibility with `git diff`, `--name-only` is a synonym for `--files-with-matches`. -O[<pager>] --open-files-in-pager[=<pager>] Open the matching files in the pager (not the output of `grep`). If the pager happens to be "less" or "vi", and the user specified only one pattern, the first file is positioned at the first match automatically. The `pager` argument is optional; if specified, it must be stuck to the option without a space. If `pager` is unspecified, the default pager will be used (see `core.pager` in [git-config[1]](git-config)). -z --null Use \0 as the delimiter for pathnames in the output, and print them verbatim. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see [git-config[1]](git-config)). -o --only-matching Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line. -c --count Instead of showing every matched line, show the number of lines that match. --color[=<when>] Show colored matches. The value must be always (the default), never, or auto. --no-color Turn off match highlighting, even when the configuration file gives the default to color output. Same as `--color=never`. --break Print an empty line between matches from different files. --heading Show the filename above the matches in that file instead of at the start of each shown line. 
-p --show-function Show the preceding line that contains the function name of the match, unless the matching line is a function name itself. The name is determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). -<num> -C <num> --context <num> Show <num> leading and trailing lines, and place a line containing `--` between contiguous groups of matches. -A <num> --after-context <num> Show <num> trailing lines, and place a line containing `--` between contiguous groups of matches. -B <num> --before-context <num> Show <num> leading lines, and place a line containing `--` between contiguous groups of matches. -W --function-context Show the surrounding text from the previous line containing a function name up to the one before the next function name, effectively showing the whole function in which the match was found. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). -m <num> --max-count <num> Limit the amount of matches per file. When using the `-v` or `--invert-match` option, the search stops after the specified number of non-matches. A value of -1 will return unlimited results (the default). A value of 0 will exit immediately with a non-zero status. --threads <num> Number of grep worker threads to use. See `grep.threads` in `CONFIGURATION` for more information. -f <file> Read patterns from <file>, one per line. Passing the pattern via <file> allows for providing a search pattern containing a \0. Not all pattern types support patterns containing \0. Git will error out if a given pattern type can’t support such a pattern. The `--perl-regexp` pattern type when compiled against the PCRE v2 backend has the widest support for these types of patterns. In versions of Git before 2.23.0 patterns containing \0 would be silently considered fixed. 
This was never documented, there were also odd and undocumented interactions between e.g. non-ASCII patterns containing \0 and `--ignore-case`. In future versions we may learn to support patterns containing \0 for more search backends, until then we’ll die when the pattern type in question doesn’t support them. -e The next parameter is the pattern. This option has to be used for patterns starting with `-` and should be used in scripts passing user input to grep. Multiple patterns are combined by `or`. --and --or --not ( …​ ) Specify how multiple patterns are combined using Boolean expressions. `--or` is the default operator. `--and` has higher precedence than `--or`. `-e` has to be used for all patterns. --all-match When giving multiple pattern expressions combined with `--or`, this flag is specified to limit the match to files that have lines to match all of them. -q --quiet Do not output matched lines; instead, exit with status 0 when there is a match and with non-zero status when there isn’t. <tree>…​ Instead of searching tracked files in the working tree, search blobs in the given trees. -- Signals the end of options; the rest of the parameters are <pathspec> limiters. <pathspec>…​ If given, limit the search to paths matching at least one pattern. Both leading paths match and glob(7) patterns are supported. For more details about the <pathspec> syntax, see the `pathspec` entry in [gitglossary[7]](gitglossary). Examples -------- `git grep 'time_t' -- '*.[ch]'` Looks for `time_t` in all tracked .c and .h files in the working directory and its subdirectories. `git grep -e '#define' --and \( -e MAX_PATH -e PATH_MAX \)` Looks for a line that has `#define` and either `MAX_PATH` or `PATH_MAX`. `git grep --all-match -e NODE -e Unexpected` Looks for a line that has `NODE` or `Unexpected` in files that have lines that match both. `git grep solution -- :^Documentation` Looks for `solution`, excluding files in `Documentation`. 
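The boolean operators and `-c` shown above can be tried end to end in a throwaway repository; a small sketch (the file name and its contents are invented):

```shell
# Throwaway repository with one staged header file (contents are invented).
tmp=$(mktemp -d)
git init -q "$tmp"
cd "$tmp"
printf '#define MAX_PATH 260\n#define OTHER 1\n' > limits.h
git add limits.h

# --and: only lines containing both "#define" and "MAX_PATH" match.
git grep --cached -n -e '#define' --and -e 'MAX_PATH'

# -c reports the number of matching lines per file instead of the lines.
git grep --cached -c '#define'
```

Here `--cached` searches the blobs just registered with `git add`, so no commit is needed.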
Notes on threads ---------------- The `--threads` option (and the grep.threads configuration) will be ignored when `--open-files-in-pager` is used, forcing a single-threaded execution. When grepping the object store (with `--cached` or giving tree objects), running with multiple threads might perform slower than single threaded if `--textconv` is given and there’re too many text conversions. So if you experience low performance in this case, it might be desirable to use `--threads=1`. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: grep.lineNumber If set to true, enable `-n` option by default. grep.column If set to true, enable the `--column` option by default. grep.patternType Set the default matching behavior. Using a value of `basic`, `extended`, `fixed`, or `perl` will enable the `--basic-regexp`, `--extended-regexp`, `--fixed-strings`, or `--perl-regexp` option accordingly, while the value `default` will use the `grep.extendedRegexp` option to choose between `basic` and `extended`. grep.extendedRegexp If set to true, enable `--extended-regexp` option by default. This option is ignored when the `grep.patternType` option is set to a value other than `default`. grep.threads Number of grep worker threads to use. If unset (or set to 0), Git will use as many threads as the number of logical cores available. grep.fullName If set to true, enable `--full-name` option by default. grep.fallbackToNoIndex If set to true, fall back to git grep --no-index if git grep is executed outside of a git repository. Defaults to false. 
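The configuration variables above are ordinary `git config` keys; a short sketch of setting them per repository (a throwaway repository here, though `--global` would apply them everywhere):

```shell
# Throwaway repository; in practice you might use --global instead.
tmp=$(mktemp -d)
git init -q "$tmp"
cd "$tmp"

# Make -n and --column the default, and use POSIX extended regexps.
git config grep.lineNumber true
git config grep.column true
git config grep.patternType extended
```

With these set, a plain `git grep <pattern>` behaves as if `-n --column --extended-regexp` had been given.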
git gitmodules gitmodules ========== Name ---- gitmodules - Defining submodule properties Synopsis -------- $GIT\_WORK\_TREE/.gitmodules Description ----------- The `.gitmodules` file, located in the top-level directory of a Git working tree, is a text file with a syntax matching the requirements of [git-config[1]](git-config). The file contains one subsection per submodule, and the subsection value is the name of the submodule. The name is set to the path where the submodule has been added unless it was customized with the `--name` option of `git submodule add`. Each submodule section also contains the following required keys: submodule.<name>.path Defines the path, relative to the top-level directory of the Git working tree, where the submodule is expected to be checked out. The path name must not end with a `/`. All submodule paths must be unique within the `.gitmodules` file. submodule.<name>.url Defines a URL from which the submodule repository can be cloned. This may be either an absolute URL ready to be passed to [git-clone[1]](git-clone) or (if it begins with `./` or `../`) a location relative to the superproject’s origin repository. In addition, there are a number of optional keys: submodule.<name>.update Defines the default update procedure for the named submodule, i.e. how the submodule is updated by the `git submodule update` command in the superproject. This is only used by `git submodule init` to initialize the configuration variable of the same name. Allowed values here are `checkout`, `rebase`, `merge` or `none`. See description of `update` command in [git-submodule[1]](git-submodule) for their meaning. For security reasons, the `!command` form is not accepted here. submodule.<name>.branch A remote branch name for tracking updates in the upstream submodule. If the option is not specified, it defaults to the remote `HEAD`. 
A special value of `.` is used to indicate that the name of the branch in the submodule should be the same name as the current branch in the current repository. See the `--remote` documentation in [git-submodule[1]](git-submodule) for details. submodule.<name>.fetchRecurseSubmodules This option can be used to control recursive fetching of this submodule. If this option is also present in the submodule’s entry in `.git/config` of the superproject, the setting there will override the one found in `.gitmodules`. Both settings can be overridden on the command line by using the `--[no-]recurse-submodules` option to `git fetch` and `git pull`. submodule.<name>.ignore Defines under what circumstances `git status` and the diff family show a submodule as modified. The following values are supported: all The submodule will never be considered modified (but will nonetheless show up in the output of status and commit when it has been staged). dirty All changes to the submodule’s work tree will be ignored, only committed differences between the `HEAD` of the submodule and its recorded state in the superproject are taken into account. untracked Only untracked files in submodules will be ignored. Committed differences and modifications to tracked files will show up. none No modifications to submodules are ignored, all of committed differences, and modifications to tracked and untracked files are shown. This is the default option. If this option is also present in the submodule’s entry in `.git/config` of the superproject, the setting there will override the one found in `.gitmodules`. Both settings can be overridden on the command line by using the `--ignore-submodules` option. The `git submodule` commands are not affected by this setting. submodule.<name>.shallow When set to true, a clone of this submodule will be performed as a shallow clone (with a history depth of 1) unless the user explicitly asks for a non-shallow clone. 
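Because `.gitmodules` uses git-config syntax, the optional keys above can be written with `git config -f` rather than edited by hand. A sketch (the submodule name `libfoo` and its URL are illustrative, matching the example below in this page):

```shell
# Start from a minimal .gitmodules (name and URL are illustrative).
tmp=$(mktemp -d)
cd "$tmp"
cat > .gitmodules <<'EOF'
[submodule "libfoo"]
    path = include/foo
    url = git://foo.com/git/lib.git
EOF

# Optional keys are added the same way as any other config key.
git config -f .gitmodules submodule.libfoo.ignore dirty
git config -f .gitmodules submodule.libfoo.shallow true

git config -f .gitmodules submodule.libfoo.ignore    # prints "dirty"
```

Using `git config -f` keeps the file syntactically valid, which hand editing does not guarantee.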
Notes ----- Git does not allow the `.gitmodules` file within a working tree to be a symbolic link, and will refuse to check out such a tree entry. This keeps behavior consistent when the file is accessed from the index or a tree versus from the filesystem, and helps Git reliably enforce security checks of the file contents. Examples -------- Consider the following `.gitmodules` file: ``` [submodule "libfoo"] path = include/foo url = git://foo.com/git/lib.git [submodule "libbar"] path = include/bar url = git://bar.com/git/lib.git ``` This defines two submodules, `libfoo` and `libbar`. These are expected to be checked out in the paths `include/foo` and `include/bar`, and for both submodules a URL is specified which can be used for cloning the submodules. See also -------- [git-submodule[1]](git-submodule), [gitsubmodules[7]](gitsubmodules), [git-config[1]](git-config) git gitignore gitignore ========= Name ---- gitignore - Specifies intentionally untracked files to ignore Synopsis -------- $XDG\_CONFIG\_HOME/git/ignore, $GIT\_DIR/info/exclude, .gitignore Description ----------- A `gitignore` file specifies intentionally untracked files that Git should ignore. Files already tracked by Git are not affected; see the NOTES below for details. Each line in a `gitignore` file specifies a pattern. When deciding whether to ignore a path, Git normally checks `gitignore` patterns from multiple sources, with the following order of precedence, from highest to lowest (within one level of precedence, the last matching pattern decides the outcome): * Patterns read from the command line for those commands that support them. * Patterns read from a `.gitignore` file in the same directory as the path, or in any parent directory (up to the top-level of the working tree), with patterns in the higher level files being overridden by those in lower level files down to the directory containing the file. These patterns match relative to the location of the `.gitignore` file. 
A project normally includes such `.gitignore` files in its repository, containing patterns for files generated as part of the project build. * Patterns read from `$GIT_DIR/info/exclude`. * Patterns read from the file specified by the configuration variable `core.excludesFile`. Which file to place a pattern in depends on how the pattern is meant to be used. * Patterns which should be version-controlled and distributed to other repositories via clone (i.e., files that all developers will want to ignore) should go into a `.gitignore` file. * Patterns which are specific to a particular repository but which do not need to be shared with other related repositories (e.g., auxiliary files that live inside the repository but are specific to one user’s workflow) should go into the `$GIT_DIR/info/exclude` file. * Patterns which a user wants Git to ignore in all situations (e.g., backup or temporary files generated by the user’s editor of choice) generally go into a file specified by `core.excludesFile` in the user’s `~/.gitconfig`. Its default value is $XDG\_CONFIG\_HOME/git/ignore. If $XDG\_CONFIG\_HOME is either not set or empty, $HOME/.config/git/ignore is used instead. The underlying Git plumbing tools, such as `git ls-files` and `git read-tree`, read `gitignore` patterns specified by command-line options, or from files specified by command-line options. Higher-level Git tools, such as `git status` and `git add`, use patterns from the sources specified above. Pattern format -------------- * A blank line matches no files, so it can serve as a separator for readability. * A line starting with # serves as a comment. Put a backslash ("`\`") in front of the first hash for patterns that begin with a hash. * Trailing spaces are ignored unless they are quoted with backslash ("`\`"). * An optional prefix "`!`" which negates the pattern; any matching file excluded by a previous pattern will become included again. 
It is not possible to re-include a file if a parent directory of that file is excluded. Git doesn’t list excluded directories for performance reasons, so any patterns on contained files have no effect, no matter where they are defined. Put a backslash ("`\`") in front of the first "`!`" for patterns that begin with a literal "`!`", for example, "`\!important!.txt`". * The slash `/` is used as the directory separator. Separators may occur at the beginning, middle or end of the `.gitignore` search pattern. * If there is a separator at the beginning or middle (or both) of the pattern, then the pattern is relative to the directory level of the particular `.gitignore` file itself. Otherwise the pattern may also match at any level below the `.gitignore` level. * If there is a separator at the end of the pattern then the pattern will only match directories, otherwise the pattern can match both files and directories. * For example, a pattern `doc/frotz/` matches `doc/frotz` directory, but not `a/doc/frotz` directory; however `frotz/` matches `frotz` and `a/frotz` that is a directory (all paths are relative from the `.gitignore` file). * An asterisk "`*`" matches anything except a slash. The character "`?`" matches any one character except "`/`". The range notation, e.g. `[a-zA-Z]`, can be used to match one of the characters in a range. See fnmatch(3) and the FNM\_PATHNAME flag for a more detailed description. Two consecutive asterisks ("`**`") in patterns matched against full pathname may have special meaning: * A leading "`**`" followed by a slash means match in all directories. For example, "`**/foo`" matches file or directory "`foo`" anywhere, the same as pattern "`foo`". "`**/foo/bar`" matches file or directory "`bar`" anywhere that is directly under directory "`foo`". * A trailing "`/**`" matches everything inside. For example, "`abc/**`" matches all files inside directory "`abc`", relative to the location of the `.gitignore` file, with infinite depth. 
* A slash followed by two consecutive asterisks then a slash matches zero or more directories. For example, "`a/**/b`" matches "`a/b`", "`a/x/b`", "`a/x/y/b`" and so on. * Other consecutive asterisks are considered regular asterisks and will match according to the previous rules. Configuration ------------- The optional configuration variable `core.excludesFile` indicates a path to a file containing patterns of file names to exclude, similar to `$GIT_DIR/info/exclude`. Patterns in the exclude file are used in addition to those in `$GIT_DIR/info/exclude`. Notes ----- The purpose of gitignore files is to ensure that certain files not tracked by Git remain untracked. To stop tracking a file that is currently tracked, use `git rm --cached`. Git does not follow symbolic links when accessing a `.gitignore` file in the working tree. This keeps behavior consistent when the file is accessed from the index or a tree versus from the filesystem. Examples -------- * The pattern `hello.*` matches any file or directory whose name begins with `hello.`. If one wants to restrict this only to the directory and not in its subdirectories, one can prepend the pattern with a slash, i.e. `/hello.*`; the pattern now matches `hello.txt`, `hello.c` but not `a/hello.java`. * The pattern `foo/` will match a directory `foo` and paths underneath it, but will not match a regular file or a symbolic link `foo` (this is consistent with the way how pathspec works in general in Git) * The pattern `doc/frotz` and `/doc/frotz` have the same effect in any `.gitignore` file. In other words, a leading slash is not relevant if there is already a middle slash in the pattern. * The pattern "foo/\*", matches "foo/test.json" (a regular file), "foo/bar" (a directory), but it does not match "foo/bar/hello.c" (a regular file), as the asterisk in the pattern does not match "bar/hello.c" which has a slash in it. ``` $ git status [...] # Untracked files: [...] 
# Documentation/foo.html # Documentation/gitignore.html # file.o # lib.a # src/internal.o [...] $ cat .git/info/exclude # ignore objects and archives, anywhere in the tree. *.[oa] $ cat Documentation/.gitignore # ignore generated html files, *.html # except foo.html which is maintained by hand !foo.html $ git status [...] # Untracked files: [...] # Documentation/foo.html [...] ``` Another example: ``` $ cat .gitignore vmlinux* $ ls arch/foo/kernel/vm* arch/foo/kernel/vmlinux.lds.S $ echo '!/vmlinux*' >arch/foo/kernel/.gitignore ``` The second .gitignore prevents Git from ignoring `arch/foo/kernel/vmlinux.lds.S`. Example to exclude everything except a specific directory `foo/bar` (note the `/*` - without the slash, the wildcard would also exclude everything within `foo/bar`): ``` $ cat .gitignore # exclude everything except directory foo/bar /* !/foo /foo/* !/foo/bar ``` See also -------- [git-rm[1]](git-rm), [gitrepository-layout[5]](gitrepository-layout), [git-check-ignore[1]](git-check-ignore)
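Which pattern ignores a given path can be checked mechanically with `git check-ignore` (listed in "See also"); a minimal sketch in a throwaway repository (the file names are invented):

```shell
# Throwaway repository with a single ignore pattern.
tmp=$(mktemp -d)
git init -q "$tmp"
cd "$tmp"
printf '*.[oa]\n' > .gitignore

# -v reports the source file, line number, and pattern that matched.
git check-ignore -v file.o

# -q gives only an exit status: 0 if the path is ignored, 1 if not.
git check-ignore -q file.o && echo "file.o is ignored"
```

Note that the paths need not exist; `check-ignore` operates on pathnames, so it is useful for testing a ruleset before creating any files.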
git git-am git-am ====== Name ---- git-am - Apply a series of patches from a mailbox Synopsis -------- ``` git am [--signoff] [--keep] [--[no-]keep-cr] [--[no-]utf8] [--[no-]3way] [--interactive] [--committer-date-is-author-date] [--ignore-date] [--ignore-space-change | --ignore-whitespace] [--whitespace=<option>] [-C<n>] [-p<n>] [--directory=<dir>] [--exclude=<path>] [--include=<path>] [--reject] [-q | --quiet] [--[no-]scissors] [-S[<keyid>]] [--patch-format=<format>] [--quoted-cr=<action>] [--empty=(stop|drop|keep)] [(<mbox> | <Maildir>)…​] git am (--continue | --skip | --abort | --quit | --show-current-patch[=(diff|raw)] | --allow-empty) ``` Description ----------- Splits mail messages in a mailbox into commit log message, authorship information and patches, and applies them to the current branch. Options ------- (<mbox>|<Maildir>)…​ The list of mailbox files to read patches from. If you do not supply this argument, the command reads from the standard input. If you supply directories, they will be treated as Maildirs. -s --signoff Add a `Signed-off-by` trailer to the commit message, using the committer identity of yourself. See the signoff option in [git-commit[1]](git-commit) for more information. -k --keep Pass `-k` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)). --keep-non-patch Pass `-b` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)). --[no-]keep-cr With `--keep-cr`, call `git mailsplit` (see [git-mailsplit[1]](git-mailsplit)) with the same option, to prevent it from stripping CR at the end of lines. `am.keepcr` configuration variable can be used to specify the default behaviour. `--no-keep-cr` is useful to override `am.keepcr`. -c --scissors Remove everything in body before a scissors line (see [git-mailinfo[1]](git-mailinfo)). Can be activated by default using the `mailinfo.scissors` configuration variable. --no-scissors Ignore scissors lines (see [git-mailinfo[1]](git-mailinfo)). 
--quoted-cr=<action> This flag will be passed down to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)). --empty=(stop|drop|keep) By default, or when the option is set to `stop`, the command errors out on an input e-mail message lacking a patch and stops in the middle of the current am session. When this option is set to `drop`, skip such an e-mail message instead. When this option is set to `keep`, create an empty commit, recording the contents of the e-mail message as its log. -m --message-id Pass the `-m` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)), so that the Message-ID header is added to the commit message. The `am.messageid` configuration variable can be used to specify the default behaviour. --no-message-id Do not add the Message-ID header to the commit message. `no-message-id` is useful to override `am.messageid`. -q --quiet Be quiet. Only print error messages. -u --utf8 Pass `-u` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)). The proposed commit log message taken from the e-mail is re-coded into UTF-8 encoding (configuration variable `i18n.commitEncoding` can be used to specify the project’s preferred encoding if it is not UTF-8). This was optional in prior versions of git, but now it is the default. You can use `--no-utf8` to override this. --no-utf8 Pass `-n` flag to `git mailinfo` (see [git-mailinfo[1]](git-mailinfo)). -3 --3way --no-3way When the patch does not apply cleanly, fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally. `--no-3way` can be used to override the am.threeWay configuration variable. For more information, see am.threeWay in [git-config[1]](git-config). --rerere-autoupdate --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. 
`--no-rerere-autoupdate` is a good way to double-check what `rerere` did and catch potential mismerges, before committing the result to the index with a separate `git add`. --ignore-space-change --ignore-whitespace --whitespace=<option> -C<n> -p<n> --directory=<dir> --exclude=<path> --include=<path> --reject These flags are passed to the `git apply` (see [git-apply[1]](git-apply)) program that applies the patch. --patch-format By default the command will try to detect the patch format automatically. This option allows the user to bypass the automatic detection and specify the patch format that the patch(es) should be interpreted as. Valid formats are mbox, mboxrd, stgit, stgit-series and hg. -i --interactive Run interactively. --committer-date-is-author-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the committer date by using the same value as the author date. --ignore-date By default the command records the date from the e-mail message as the commit author date, and uses the time of commit creation as the committer date. This allows the user to lie about the author date by using the same value as the committer date. --skip Skip the current patch. This is only meaningful when restarting an aborted patch. -S[<keyid>] --gpg-sign[=<keyid>] --no-gpg-sign GPG-sign commits. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. `--no-gpg-sign` is useful to countermand both `commit.gpgSign` configuration variable, and earlier `--gpg-sign`. --continue -r --resolved After a patch failure (e.g. attempting to apply conflicting patch), the user has applied it by hand and the index file stores the result of the application. Make a commit using the authorship and commit log extracted from the e-mail message and the current index file, and continue. 
--resolvemsg=<msg> When a patch failure occurs, <msg> will be printed to the screen before exiting. This overrides the standard message informing you to use `--continue` or `--skip` to handle the failure. This is solely for internal use between `git rebase` and `git am`. --abort Restore the original branch and abort the patching operation. Revert contents of files involved in the am operation to their pre-am state. --quit Abort the patching operation but keep HEAD and the index untouched. --show-current-patch[=(diff|raw)] Show the message at which `git am` has stopped due to conflicts. If `raw` is specified, show the raw contents of the e-mail message; if `diff`, show the diff portion only. Defaults to `raw`. --allow-empty After a patch failure on an input e-mail message lacking a patch, create an empty commit with the contents of the e-mail message as its log message. Discussion ---------- The commit author name is taken from the "From: " line of the message, and commit author date is taken from the "Date: " line of the message. The "Subject: " line is used as the title of the commit, after stripping common prefix "[PATCH <anything>]". The "Subject: " line is supposed to concisely describe what the commit is about in one line of text. "From: ", "Date: ", and "Subject: " lines starting the body override the respective commit author name and title values taken from the headers. The commit message is formed by the title taken from the "Subject: ", a blank line and the body of the message up to where the patch begins. Excess whitespace at the end of each line is automatically stripped. The patch is expected to be inline, directly following the message. Any line that is of the form: * three-dashes and end-of-line, or * a line that begins with "diff -", or * a line that begins with "Index: " is taken as the beginning of a patch, and the commit log message is terminated before the first occurrence of such a line. 
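The message splitting described above can be exercised end to end with `git format-patch` on one side and `git am` on the other. A minimal sketch (repository layout, file names, and commit messages here are invented for illustration):

```shell
# Hypothetical round trip: export the tip commit as an mbox-style patch,
# then re-apply it in a clone, preserving authorship and the log message.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config user.email author@example.com && git config user.name 'A U Thor'
echo base > file.txt && git add file.txt && git commit -qm 'Initial commit'
echo change >> file.txt && git commit -qam 'Describe the change'
git format-patch -1 -o ../patches        # writes 0001-Describe-the-change.patch
cd .. && git clone -q upstream clone && cd clone
git config user.email committer@example.com && git config user.name 'C O Mitter'
git reset -q --hard HEAD~1               # drop the tip so the patch applies
git am ../patches/0001-*.patch           # "Subject:" becomes the commit title,
                                         # "From:"/"Date:" the author identity
git log -1 --format='%s by %an'          # prints: Describe the change by A U Thor
```

Note that the re-created commit keeps the original author while the committer is taken from the applying repository's identity.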
When initially invoking `git am`, you give it the names of the mailboxes to process. Upon seeing the first patch that does not apply, it aborts in the middle. You can recover from this in one of two ways: 1. skip the current patch by re-running the command with the `--skip` option. 2. hand resolve the conflict in the working directory, and update the index file to bring it into a state that the patch should have produced. Then run the command with the `--continue` option. The command refuses to process new mailboxes until the current operation is finished, so if you decide to start over from scratch, run `git am --abort` before running the command with mailbox names. Before any patches are applied, ORIG\_HEAD is set to the tip of the current branch. This is useful if you have problems with multiple commits, like running `git am` on the wrong branch or an error in the commits that is more easily fixed by changing the mailbox (e.g. errors in the "From:" lines). Hooks ----- This command can run `applypatch-msg`, `pre-applypatch`, and `post-applypatch` hooks. See [githooks[5]](githooks) for more information. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: am.keepcr If true, git-am will call git-mailsplit for patches in mbox format with parameter `--keep-cr`. In this case git-mailsplit will not remove `\r` from lines ending with `\r\n`. Can be overridden by giving `--no-keep-cr` from the command line. See [git-am[1]](git-am), [git-mailsplit[1]](git-mailsplit). am.threeWay By default, `git am` will fail if the patch does not apply cleanly. When set to true, this setting tells `git am` to fall back on 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally (equivalent to giving the `--3way` option from the command line). Defaults to `false`. 
See [git-am[1]](git-am). See also -------- [git-apply[1]](git-apply). git git-check-attr git-check-attr ============== Name ---- git-check-attr - Display gitattributes information Synopsis -------- ``` git check-attr [-a | --all | <attr>…​] [--] <pathname>…​ git check-attr --stdin [-z] [-a | --all | <attr>…​] ``` Description ----------- For every pathname, this command will list if each attribute is `unspecified`, `set`, or `unset` as a gitattribute on that pathname. Options ------- -a, --all List all attributes that are associated with the specified paths. If this option is used, then `unspecified` attributes will not be included in the output. --cached Consider `.gitattributes` in the index only, ignoring the working tree. --stdin Read pathnames from the standard input, one per line, instead of from the command-line. -z The output format is modified to be machine-parsable. If `--stdin` is also given, input paths are separated with a NUL character instead of a linefeed character. -- Interpret all preceding arguments as attributes and all following arguments as path names. If none of `--stdin`, `--all`, or `--` is used, the first argument will be treated as an attribute and the rest of the arguments as pathnames. Output ------ The output is of the form: <path> COLON SP <attribute> COLON SP <info> LF unless `-z` is in effect, in which case NUL is used as delimiter: <path> NUL <attribute> NUL <info> NUL <path> is the path of a file being queried, <attribute> is an attribute being queried and <info> can be either: *unspecified* when the attribute is not defined for the path. *unset* when the attribute is defined as false. *set* when the attribute is defined as true. <value> when a value has been assigned to the attribute. Buffering happens as documented under the `GIT_FLUSH` option in [git[1]](git). The caller is responsible for avoiding deadlocks caused by overfilling an input buffer or reading from an empty output buffer. 
Examples -------- In the examples, the following `.gitattributes` file is used: ``` *.java diff=java -crlf myAttr NoMyAttr.java !myAttr README caveat=unspecified ``` * Listing a single attribute: ``` $ git check-attr diff org/example/MyClass.java org/example/MyClass.java: diff: java ``` * Listing multiple attributes for a file: ``` $ git check-attr crlf diff myAttr -- org/example/MyClass.java org/example/MyClass.java: crlf: unset org/example/MyClass.java: diff: java org/example/MyClass.java: myAttr: set ``` * Listing all attributes for a file: ``` $ git check-attr --all -- org/example/MyClass.java org/example/MyClass.java: diff: java org/example/MyClass.java: myAttr: set ``` * Listing an attribute for multiple files: ``` $ git check-attr myAttr -- org/example/MyClass.java org/example/NoMyAttr.java org/example/MyClass.java: myAttr: set org/example/NoMyAttr.java: myAttr: unspecified ``` * Not all values are equally unambiguous: ``` $ git check-attr caveat README README: caveat: unspecified ``` See also -------- [gitattributes[5]](gitattributes). git git-difftool git-difftool ============ Name ---- git-difftool - Show changes using common diff tools Synopsis -------- ``` git difftool [<options>] [<commit> [<commit>]] [--] [<path>…​] ``` Description ----------- `git difftool` is a Git command that allows you to compare and edit files between revisions using common diff tools. `git difftool` is a frontend to `git diff` and accepts the same options and arguments. See [git-diff[1]](git-diff). Options ------- -d --dir-diff Copy the modified files to a temporary location and perform a directory diff on them. This mode never prompts before launching the diff tool. -y --no-prompt Do not prompt before launching a diff tool. --prompt Prompt before each invocation of the diff tool. This is the default behaviour; the option is provided to override any configuration settings. 
--rotate-to=<file> Start showing the diff for the given path; the paths before it will be moved to the end of the output. --skip-to=<file> Start showing the diff for the given path, skipping all the paths before it. -t <tool> --tool=<tool> Use the diff tool specified by <tool>. Valid values include emerge, kompare, meld, and vimdiff. Run `git difftool --tool-help` for the list of valid <tool> settings. If a diff tool is not specified, `git difftool` will use the configuration variable `diff.tool`. If the configuration variable `diff.tool` is not set, `git difftool` will pick a suitable default. You can explicitly provide a full path to the tool by setting the configuration variable `difftool.<tool>.path`. For example, you can configure the absolute path to kdiff3 by setting `difftool.kdiff3.path`. Otherwise, `git difftool` assumes the tool is available in PATH. Instead of running one of the known diff tools, `git difftool` can be customized to run an alternative program by specifying the command line to invoke in a configuration variable `difftool.<tool>.cmd`. When `git difftool` is invoked with this tool (either through the `-t` or `--tool` option or the `diff.tool` configuration variable) the configured command line will be invoked with the following variables available: `$LOCAL` is set to the name of the temporary file containing the contents of the diff pre-image and `$REMOTE` is set to the name of the temporary file containing the contents of the diff post-image. `$MERGED` is the name of the file which is being compared. `$BASE` is provided for compatibility with custom merge tool commands and has the same value as `$MERGED`. --tool-help Print a list of diff tools that may be used with `--tool`. --[no-]symlinks `git difftool`'s default behavior is to create symlinks to the working tree when run in `--dir-diff` mode and the right-hand side of the comparison yields the same content as the file in the working tree.
Specifying `--no-symlinks` instructs `git difftool` to create copies instead. `--no-symlinks` is the default on Windows. -x <command> --extcmd=<command> Specify a custom command for viewing diffs. `git-difftool` ignores the configured defaults and runs `$command $LOCAL $REMOTE` when this option is specified. Additionally, `$BASE` is set in the environment. -g --[no-]gui When `git-difftool` is invoked with the `-g` or `--gui` option the default diff tool will be read from the configured `diff.guitool` variable instead of `diff.tool`. The `--no-gui` option can be used to override this setting. If `diff.guitool` is not set, we will fall back in the order of `merge.guitool`, `diff.tool`, `merge.tool` until a tool is found. --[no-]trust-exit-code `git-difftool` invokes a diff tool individually on each file. Errors reported by the diff tool are ignored by default. Use `--trust-exit-code` to make `git-difftool` exit when an invoked diff tool returns a non-zero exit code. `git-difftool` will forward the exit code of the invoked tool when `--trust-exit-code` is used. See [git-diff[1]](git-diff) for the full list of supported options. Configuration ------------- `git difftool` falls back to `git mergetool` config variables when the difftool equivalents have not been defined. Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: diff.tool Controls which diff tool is used by [git-difftool[1]](git-difftool). This variable overrides the value configured in `merge.tool`. The list below shows the valid built-in values. Any other value is treated as a custom diff tool and requires that a corresponding difftool.<tool>.cmd variable is defined. diff.guitool Controls which diff tool is used by [git-difftool[1]](git-difftool) when the -g/--gui flag is specified. This variable overrides the value configured in `merge.guitool`. The list below shows the valid built-in values.
Any other value is treated as a custom diff tool and requires that a corresponding difftool.<guitool>.cmd variable is defined. difftool.<tool>.cmd Specify the command to invoke the specified diff tool. The specified command is evaluated in shell with the following variables available: `LOCAL` is set to the name of the temporary file containing the contents of the diff pre-image and `REMOTE` is set to the name of the temporary file containing the contents of the diff post-image. See the `--tool=<tool>` option in [git-difftool[1]](git-difftool) for more details. difftool.<tool>.path Override the path for the given tool. This is useful in case your tool is not in the PATH. difftool.trustExitCode Exit difftool if the invoked diff tool returns a non-zero exit status. See the `--trust-exit-code` option in [git-difftool[1]](git-difftool) for more details. difftool.prompt Prompt before each invocation of the diff tool. See also -------- [git-diff[1]](git-diff) Show changes between commits, commit and working tree, etc [git-mergetool[1]](git-mergetool) Run merge conflict resolution tools to resolve merge conflicts [git-config[1]](git-config) Get and set repository or global options git git-bugreport git-bugreport ============= Name ---- git-bugreport - Collect information for user to file a bug report Synopsis -------- ``` git bugreport [(-o | --output-directory) <path>] [(-s | --suffix) <format>] [--diagnose[=<mode>]] ``` Description ----------- Captures information about the user’s machine, Git client, and repository state, as well as a form requesting information about the behavior the user observed, into a single text file which the user can then share, for example to the Git mailing list, in order to report an observed bug. 
The following information is requested from the user: * Reproduction steps * Expected behavior * Actual behavior The following information is captured automatically: * `git version --build-options` * uname sysname, release, version, and machine strings * Compiler-specific info string * A list of enabled hooks * $SHELL Additional information may be gathered into a separate zip archive using the `--diagnose` option, and can be attached alongside the bugreport document to provide additional context to readers. This tool is invoked via the typical Git setup process, which means that in some cases, it might not be able to launch - for example, if a relevant config file is unreadable. In this kind of scenario, it may be helpful to manually gather the kind of information listed above when manually asking for help. Options ------- -o <path> --output-directory <path> Place the resulting bug report file in `<path>` instead of the current directory. -s <format> --suffix <format> Specify an alternate suffix for the bugreport name, to create a file named `git-bugreport-<formatted suffix>`. This should take the form of a strftime(3) format string; the current local time will be used. --no-diagnose --diagnose[=<mode>] Create a zip archive of supplemental information about the user’s machine, Git client, and repository state. The archive is written to the same output directory as the bug report and is named `git-diagnostics-<formatted suffix>`. Without `mode` specified, the diagnostic archive will contain the default set of statistics reported by `git diagnose`. An optional `mode` value may be specified to change which information is included in the archive. See [git-diagnose[1]](git-diagnose) for the list of valid values for `mode` and details about their usage.
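For example, writing the report template into a chosen directory with a date-only suffix (the directory and `strftime` format below are arbitrary; `git bugreport` requires Git 2.27 or later):

```shell
# Sketch: generate the bug report form plus the auto-captured system info.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/reports"
git bugreport -o "$tmp/reports" -s '%Y-%m-%d'   # creates git-bugreport-<YYYY-MM-DD>.txt
ls "$tmp/reports"
```

The resulting file contains the fill-in form followed by the `git version --build-options` and system details described above.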
git git-http-push git-http-push ============= Name ---- git-http-push - Push objects over HTTP/DAV to another repository Synopsis -------- ``` git http-push [--all] [--dry-run] [--force] [--verbose] <URL> <ref> [<ref>…​] ``` Description ----------- Sends missing objects to remote repository, and updates the remote branch. **NOTE**: This command is temporarily disabled if your libcurl is older than 7.16, as the combination has been reported not to work and sometimes corrupts repository. Options ------- --all Do not assume that the remote repository is complete in its current state, and verify all objects in the entire local ref’s history exist in the remote repository. --force Usually, the command refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it. This flag disables the check. What this means is that the remote repository can lose commits; use it with care. --dry-run Do everything except actually send the updates. --verbose Report the list of objects being walked locally and the list of objects successfully sent to the remote repository. -d -D Remove <ref> from remote repository. The specified branch cannot be the remote HEAD. If -d is specified the following other conditions must also be met: * Remote HEAD must resolve to an object that exists locally * Specified branch resolves to an object that exists locally * Specified branch is an ancestor of the remote HEAD <ref>…​ The remote refs to update. Specifying the refs ------------------- A `<ref>` specification can be either a single pattern, or a pair of such patterns separated by a colon ":" (this means that a ref name cannot have a colon in it). A single pattern `<name>` is just a shorthand for `<name>:<name>`. Each pattern pair `<src>:<dst>` consists of the source side (before the colon) and the destination side (after the colon). 
The ref to be pushed is determined by finding a match that matches the source side, and where it is pushed is determined by using the destination side. * It is an error if `<src>` does not match exactly one of the local refs. * If `<dst>` does not match any remote ref, either + it has to start with "refs/"; <dst> is used as the destination literally in this case. + <src> == <dst> and the ref that matched the <src> must not exist in the set of remote refs; the ref matched <src> locally is used as the name of the destination. Without `--force`, the <src> ref is stored at the remote only if <dst> does not exist, or <dst> is a proper subset (i.e. an ancestor) of <src>. This check, known as "fast-forward check", is performed in order to avoid accidentally overwriting the remote ref and losing other people’s commits from there. With `--force`, the fast-forward check is disabled for all refs. Optionally, a <ref> parameter can be prefixed with a plus `+` sign to disable the fast-forward check only on that ref. git git-submodule git-submodule ============= Name ---- git-submodule - Initialize, update or inspect submodules Synopsis -------- ``` git submodule [--quiet] [--cached] git submodule [--quiet] add [<options>] [--] <repository> [<path>] git submodule [--quiet] status [--cached] [--recursive] [--] [<path>…​] git submodule [--quiet] init [--] [<path>…​] git submodule [--quiet] deinit [-f|--force] (--all|[--] <path>…​) git submodule [--quiet] update [<options>] [--] [<path>…​] git submodule [--quiet] set-branch [<options>] [--] <path> git submodule [--quiet] set-url [--] <path> <newurl> git submodule [--quiet] summary [<options>] [--] [<path>…​] git submodule [--quiet] foreach [--recursive] <command> git submodule [--quiet] sync [--recursive] [--] [<path>…​] git submodule [--quiet] absorbgitdirs [--] [<path>…​] ``` Description ----------- Inspects, updates and manages submodules. For more information about submodules, see [gitsubmodules[7]](gitsubmodules).
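As a quick sketch of the typical flow (the repository names here are invented; the `protocol.file.allow` override is only needed because local file-based submodule clones are restricted by default since Git 2.38.1):

```shell
# Sketch: create a library repo, then record it as a submodule of a superproject.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q lib && (cd lib &&
  git config user.email a@example.com && git config user.name A &&
  echo hello > README && git add README && git commit -qm 'lib: initial commit')
git init -q super && cd super
git config user.email a@example.com && git config user.name A
git -c protocol.file.allow=always submodule add ../lib lib
git commit -qm 'Add lib as a submodule'
git submodule status           # prints the recorded commit and the path "lib"
cat .gitmodules                # records the path and URL for future clones
```

With no default remote configured in the superproject, the relative URL `../lib` resolves against the current working directory, as described under the add subcommand below.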
Commands -------- With no arguments, shows the status of existing submodules. Several subcommands are available to perform operations on the submodules. add [-b <branch>] [-f|--force] [--name <name>] [--reference <repository>] [--depth <depth>] [--] <repository> [<path>] Add the given repository as a submodule at the given path to the changeset to be committed next to the current project: the current project is termed the "superproject". <repository> is the URL of the new submodule’s origin repository. This may be either an absolute URL, or (if it begins with ./ or ../), the location relative to the superproject’s default remote repository (Please note that to specify a repository `foo.git` which is located right next to a superproject `bar.git`, you’ll have to use `../foo.git` instead of `./foo.git` - as one might expect when following the rules for relative URLs - because the evaluation of relative URLs in Git is identical to that of relative directories). The default remote is the remote of the remote-tracking branch of the current branch. If no such remote-tracking branch exists or the HEAD is detached, "origin" is assumed to be the default remote. If the superproject doesn’t have a default remote configured the superproject is its own authoritative upstream and the current working directory is used instead. The optional argument <path> is the relative location for the cloned submodule to exist in the superproject. If <path> is not given, the canonical part of the source repository is used ("repo" for "/path/to/repo.git" and "foo" for "host.xz:foo/.git"). If <path> exists and is already a valid Git repository, then it is staged for commit without cloning. The <path> is also used as the submodule’s logical name in its configuration entries unless `--name` is used to specify a logical name. The given URL is recorded into `.gitmodules` for use by subsequent users cloning the superproject. 
If the URL is given relative to the superproject’s repository, the presumption is the superproject and submodule repositories will be kept together in the same relative location, and only the superproject’s URL needs to be provided. git-submodule will correctly locate the submodule using the relative URL in `.gitmodules`. status [--cached] [--recursive] [--] [<path>…​] Show the status of the submodules. This will print the SHA-1 of the currently checked out commit for each submodule, along with the submodule path and the output of `git describe` for the SHA-1. Each SHA-1 will possibly be prefixed with `-` if the submodule is not initialized, `+` if the currently checked out submodule commit does not match the SHA-1 found in the index of the containing repository and `U` if the submodule has merge conflicts. If `--cached` is specified, this command will instead print the SHA-1 recorded in the superproject for each submodule. If `--recursive` is specified, this command will recurse into nested submodules, and show their status as well. If you are only interested in changes of the currently initialized submodules with respect to the commit recorded in the index or the HEAD, [git-status[1]](git-status) and [git-diff[1]](git-diff) will provide that information too (and can also report changes to a submodule’s work tree). init [--] [<path>…​] Initialize the submodules recorded in the index (which were added and committed elsewhere) by setting `submodule.$name.url` in .git/config. It uses the same setting from `.gitmodules` as a template. If the URL is relative, it will be resolved using the default remote. If there is no default remote, the current repository will be assumed to be upstream. Optional <path> arguments limit which submodules will be initialized. If no path is specified and submodule.active has been configured, submodules configured to be active will be initialized, otherwise all submodules are initialized. 
When present, it will also copy the value of `submodule.$name.update`. This command does not alter existing information in .git/config. You can then customize the submodule clone URLs in .git/config for your local setup and proceed to `git submodule update`; you can also just use `git submodule update --init` without the explicit `init` step if you do not intend to customize any submodule locations. See the add subcommand for the definition of default remote. deinit [-f|--force] (--all|[--] <path>…​) Unregister the given submodules, i.e. remove the whole `submodule.$name` section from .git/config together with their work tree. Further calls to `git submodule update`, `git submodule foreach` and `git submodule sync` will skip any unregistered submodules until they are initialized again, so use this command if you don’t want to have a local checkout of the submodule in your working tree anymore. When the command is run without pathspec, it errors out, instead of deinit-ing everything, to prevent mistakes. If `--force` is specified, the submodule’s working tree will be removed even if it contains local modifications. If you really want to remove a submodule from the repository and commit that, use [git-rm[1]](git-rm) instead. See [gitsubmodules[7]](gitsubmodules) for removal options. update [--init] [--remote] [-N|--no-fetch] [--[no-]recommend-shallow] [-f|--force] [--checkout|--rebase|--merge] [--reference <repository>] [--depth <depth>] [--recursive] [--jobs <n>] [--[no-]single-branch] [--filter <filter spec>] [--] [<path>…​] Update the registered submodules to match what the superproject expects by cloning missing submodules, fetching missing commits in submodules and updating the working tree of the submodules. The "updating" can be done in several ways depending on command line options and the value of `submodule.<name>.update` configuration variable. The command line option takes precedence over the configuration variable.
If neither is given, a `checkout` is performed. The `update` procedures supported both from the command line as well as through the `submodule.<name>.update` configuration are: checkout the commit recorded in the superproject will be checked out in the submodule on a detached HEAD. If `--force` is specified, the submodule will be checked out (using `git checkout --force`), even if the commit specified in the index of the containing repository already matches the commit checked out in the submodule. rebase the current branch of the submodule will be rebased onto the commit recorded in the superproject. merge the commit recorded in the superproject will be merged into the current branch in the submodule. The following `update` procedures are only available via the `submodule.<name>.update` configuration variable: custom command arbitrary shell command that takes a single argument (the sha1 of the commit recorded in the superproject) is executed. When `submodule.<name>.update` is set to `!command`, the remainder after the exclamation mark is the custom command. none the submodule is not updated. If the submodule is not yet initialized, and you just want to use the setting as stored in `.gitmodules`, you can automatically initialize the submodule with the `--init` option. If `--recursive` is specified, this command will recurse into the registered submodules, and update any nested submodules within. If `--filter <filter spec>` is specified, the given partial clone filter will be applied to the submodule. See [git-rev-list[1]](git-rev-list) for details on filter specifications. set-branch (-b|--branch) <branch> [--] <path> set-branch (-d|--default) [--] <path> Sets the default remote tracking branch for the submodule. The `--branch` option allows the remote branch to be specified. The `--default` option removes the submodule.<name>.branch configuration key, which causes the tracking branch to default to the remote `HEAD`. 
set-url [--] <path> <newurl> Sets the URL of the specified submodule to <newurl>. Then, it will automatically synchronize the submodule’s new remote URL configuration. summary [--cached|--files] [(-n|--summary-limit) <n>] [commit] [--] [<path>…​] Show commit summary between the given commit (defaults to HEAD) and working tree/index. For a submodule in question, a series of commits in the submodule between the given super project commit and the index or working tree (switched by `--cached`) are shown. If the option `--files` is given, show the series of commits in the submodule between the index of the super project and the working tree of the submodule (this option doesn’t allow using the `--cached` option or providing an explicit commit). Using the `--submodule=log` option with [git-diff[1]](git-diff) will provide that information too. foreach [--recursive] <command> Evaluates an arbitrary shell command in each checked out submodule. The command has access to the variables $name, $sm\_path, $displaypath, $sha1 and $toplevel: $name is the name of the relevant submodule section in `.gitmodules`, $sm\_path is the path of the submodule as recorded in the immediate superproject, $displaypath contains the relative path from the current working directory to the submodule’s root directory, $sha1 is the commit as recorded in the immediate superproject, and $toplevel is the absolute path to the top-level of the immediate superproject. Note that to avoid conflicts with `$PATH` on Windows, the `$path` variable is now a deprecated synonym of `$sm_path` variable. Any submodules defined in the superproject but not checked out are ignored by this command. Unless given `--quiet`, foreach prints the name of each submodule before evaluating the command. If `--recursive` is given, submodules are traversed recursively (i.e. the given shell command is evaluated in nested submodules as well). A non-zero return from the command in any submodule causes the processing to terminate.
This can be overridden by adding `|| :` to the end of the command. As an example, the command below will show the path and currently checked out commit for each submodule: ``` git submodule foreach 'echo $sm_path `git rev-parse HEAD`' ``` sync [--recursive] [--] [<path>…​] Synchronizes submodules' remote URL configuration setting to the value specified in `.gitmodules`. It will only affect those submodules which already have a URL entry in .git/config (that is the case when they are initialized or freshly added). This is useful when submodule URLs change upstream and you need to update your local repositories accordingly. `git submodule sync` synchronizes all submodules while `git submodule sync -- A` synchronizes submodule "A" only. If `--recursive` is specified, this command will recurse into the registered submodules, and sync any nested submodules within. absorbgitdirs If a git directory of a submodule is inside the submodule, move the git directory of the submodule into its superproject’s `$GIT_DIR/modules` path and then connect the git directory and its working directory by setting the `core.worktree` and adding a .git file pointing to the git directory embedded in the superproject’s git directory. A repository that was cloned independently and later added as a submodule or old setups have the submodule’s git directory inside the submodule instead of embedded into the superproject’s git directory. This command is recursive by default. Options ------- -q --quiet Only print error messages. --progress This option is only valid for add and update commands. Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --all This option is only valid for the deinit command. Unregister all submodules in the working tree. -b <branch> --branch <branch> Branch of repository to add as submodule.
The name of the branch is recorded as `submodule.<name>.branch` in `.gitmodules` for `update --remote`. A special value of `.` is used to indicate that the name of the branch in the submodule should be the same name as the current branch in the current repository. If the option is not specified, it defaults to the remote `HEAD`. -f --force This option is only valid for add, deinit and update commands. When running add, allow adding an otherwise ignored submodule path. When running deinit the submodule working trees will be removed even if they contain local changes. When running update (only effective with the checkout procedure), throw away local changes in submodules when switching to a different commit; and always run a checkout operation in the submodule, even if the commit listed in the index of the containing repository matches the commit checked out in the submodule. --cached This option is only valid for status and summary commands. These commands typically use the commit found in the submodule HEAD, but with this option, the commit stored in the index is used instead. --files This option is only valid for the summary command. This command compares the commit in the index with that in the submodule HEAD when this option is used. -n --summary-limit This option is only valid for the summary command. Limit the summary size (number of commits shown in total). Giving 0 will disable the summary; a negative number means unlimited (the default). This limit only applies to modified submodules. The size is always limited to 1 for added/deleted/typechanged submodules. --remote This option is only valid for the update command. Instead of using the superproject’s recorded SHA-1 to update the submodule, use the status of the submodule’s remote-tracking branch. The remote used is branch’s remote (`branch.<name>.remote`), defaulting to `origin`. 
The remote branch used defaults to the remote `HEAD`, but the branch name may be overridden by setting the `submodule.<name>.branch` option in either `.gitmodules` or `.git/config` (with `.git/config` taking precedence). This works for any of the supported update procedures (`--checkout`, `--rebase`, etc.). The only change is the source of the target SHA-1. For example, `submodule update --remote --merge` will merge upstream submodule changes into the submodules, while `submodule update --merge` will merge superproject gitlink changes into the submodules. In order to ensure a current tracking branch state, `update --remote` fetches the submodule’s remote repository before calculating the SHA-1. If you don’t want to fetch, you should use `submodule update --remote --no-fetch`. Use this option to integrate changes from the upstream subproject with your submodule’s current HEAD. Alternatively, you can run `git pull` from the submodule, which is equivalent except for the remote branch name: `update --remote` uses the default upstream repository and `submodule.<name>.branch`, while `git pull` uses the submodule’s `branch.<name>.merge`. Prefer `submodule.<name>.branch` if you want to distribute the default upstream branch with the superproject and `branch.<name>.merge` if you want a more native feel while working in the submodule itself. -N --no-fetch This option is only valid for the update command. Don’t fetch new objects from the remote site. --checkout This option is only valid for the update command. Checkout the commit recorded in the superproject on a detached HEAD in the submodule. This is the default behavior, the main use of this option is to override `submodule.$name.update` when set to a value other than `checkout`. If the key `submodule.$name.update` is either not explicitly set or set to `checkout`, this option is implicit. --merge This option is only valid for the update command. 
Merge the commit recorded in the superproject into the current branch of the submodule. If this option is given, the submodule’s HEAD will not be detached. If a merge failure prevents this process, you will have to resolve the resulting conflicts within the submodule with the usual conflict resolution tools. If the key `submodule.$name.update` is set to `merge`, this option is implicit. --rebase This option is only valid for the update command. Rebase the current branch onto the commit recorded in the superproject. If this option is given, the submodule’s HEAD will not be detached. If a merge failure prevents this process, you will have to resolve these failures with [git-rebase[1]](git-rebase). If the key `submodule.$name.update` is set to `rebase`, this option is implicit. --init This option is only valid for the update command. Initialize all submodules for which "git submodule init" has not been called so far before updating. --name This option is only valid for the add command. It sets the submodule’s name to the given string instead of defaulting to its path. The name must be valid as a directory name and may not end with a `/`. --reference <repository> This option is only valid for add and update commands. These commands sometimes need to clone a remote repository. In this case, this option will be passed to the [git-clone[1]](git-clone) command. **NOTE**: Do **not** use this option unless you have read the note for [git-clone[1]](git-clone)'s `--reference`, `--shared`, and `--dissociate` options carefully. --dissociate This option is only valid for add and update commands. These commands sometimes need to clone a remote repository. In this case, this option will be passed to the [git-clone[1]](git-clone) command. **NOTE**: see the NOTE for the `--reference` option. --recursive This option is only valid for foreach, update, status and sync commands. Traverse submodules recursively. 
The operation is performed not only in the submodules of the current repo, but also in any nested submodules inside those submodules (and so on).

--depth
This option is valid for add and update commands. Create a `shallow` clone with a history truncated to the specified number of revisions. See [git-clone[1]](git-clone).

--[no-]recommend-shallow
This option is only valid for the update command. The initial clone of a submodule will use the recommended `submodule.<name>.shallow` as provided by the `.gitmodules` file by default. To ignore the suggestions use `--no-recommend-shallow`.

-j <n>
--jobs <n>
This option is only valid for the update command. Clone new submodules in parallel, using up to <n> jobs. Defaults to the `submodule.fetchJobs` option.

--[no-]single-branch
This option is only valid for the update command. Clone only one branch during update: HEAD or one specified by --branch.

<path>…​
Paths to submodule(s). When specified this will restrict the command to only operate on the submodules found at the specified paths. (This argument is required with add).

Files
-----

When initializing submodules, a `.gitmodules` file in the top-level directory of the containing repository is used to find the URL of each submodule. This file should be formatted in the same way as `$GIT_DIR/config`. The key to each submodule URL is "submodule.$name.url". See [gitmodules[5]](gitmodules) for details.

See also
--------

[gitsubmodules[7]](gitsubmodules), [gitmodules[5]](gitmodules).
git git-merge-file git-merge-file ============== Name ---- git-merge-file - Run a three-way file merge Synopsis -------- ``` git merge-file [-L <current-name> [-L <base-name> [-L <other-name>]]] [--ours|--theirs|--union] [-p|--stdout] [-q|--quiet] [--marker-size=<n>] [--[no-]diff3] <current-file> <base-file> <other-file> ``` Description ----------- `git merge-file` incorporates all changes that lead from the `<base-file>` to `<other-file>` into `<current-file>`. The result ordinarily goes into `<current-file>`. `git merge-file` is useful for combining separate changes to an original. Suppose `<base-file>` is the original, and both `<current-file>` and `<other-file>` are modifications of `<base-file>`, then `git merge-file` combines both changes. A conflict occurs if both `<current-file>` and `<other-file>` have changes in a common segment of lines. If a conflict is found, `git merge-file` normally outputs a warning and brackets the conflict with lines containing <<<<<<< and >>>>>>> markers. A typical conflict will look like this: ``` <<<<<<< A lines in file A ======= lines in file B >>>>>>> B ``` If there are conflicts, the user should edit the result and delete one of the alternatives. When `--ours`, `--theirs`, or `--union` option is in effect, however, these conflicts are resolved favouring lines from `<current-file>`, lines from `<other-file>`, or lines from both respectively. The length of the conflict markers can be given with the `--marker-size` option. The exit value of this program is negative on error, and the number of conflicts otherwise (truncated to 127 if there are more than that many conflicts). If the merge was clean, the exit value is 0. `git merge-file` is designed to be a minimal clone of RCS `merge`; that is, it implements all of RCS `merge`'s functionality which is needed by [git[1]](git). 
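As a rough illustration of the exit-status convention described above (the file names and contents below are invented for the example, not taken from the manual), a single conflicting hunk yields an exit value of 1:

```shell
# Scratch demo: one conflicting hunk makes `git merge-file` exit with status 1.
dir=$(mktemp -d) && cd "$dir"
printf 'one\ntwo\nthree\n'    > base.txt
printf 'one\nOURS\nthree\n'   > current.txt
printf 'one\nTHEIRS\nthree\n' > other.txt

# -p sends the merged result to stdout instead of rewriting current.txt;
# capture the exit status, which is the number of conflicts.
conflicts=0
git merge-file -p current.txt base.txt other.txt > merged.txt || conflicts=$?
echo "conflicts: $conflicts"   # prints "conflicts: 1"
grep -c '^<<<<<<<' merged.txt  # one conflict marker block in the output
```

Running the same merge on inputs that change disjoint line ranges would instead exit 0 and produce a cleanly combined file.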
Options ------- -L <label> This option may be given up to three times, and specifies labels to be used in place of the corresponding file names in conflict reports. That is, `git merge-file -L x -L y -L z a b c` generates output that looks like it came from files x, y and z instead of from files a, b and c. -p Send results to standard output instead of overwriting `<current-file>`. -q Quiet; do not warn about conflicts. --diff3 Show conflicts in "diff3" style. --zdiff3 Show conflicts in "zdiff3" style. --ours --theirs --union Instead of leaving conflicts in the file, resolve conflicts favouring our (or their or both) side of the lines. Examples -------- `git merge-file README.my README README.upstream` combines the changes of README.my and README.upstream since README, tries to merge them and writes the result into README.my. `git merge-file -L a -L b -L c tmp/a123 tmp/b234 tmp/c345` merges tmp/a123 and tmp/c345 with the base tmp/b234, but uses labels `a` and `c` instead of `tmp/a123` and `tmp/c345`. git git-sparse-checkout git-sparse-checkout =================== Name ---- git-sparse-checkout - Reduce your working tree to a subset of tracked files Synopsis -------- ``` git sparse-checkout (init | list | set | add | reapply | disable) [<options>] ``` Description ----------- This command is used to create sparse checkouts, which change the working tree from having all tracked files present to only having a subset of those files. It can also switch which subset of files are present, or undo and go back to having all tracked files present in the working copy. The subset of files is chosen by providing a list of directories in cone mode (the default), or by providing a list of patterns in non-cone mode. When in a sparse-checkout, other Git commands behave a bit differently. 
For example, switching branches will not update paths outside the sparse-checkout directories/patterns, and `git commit -a` will not record paths outside the sparse-checkout directories/patterns as deleted.

THIS COMMAND IS EXPERIMENTAL. ITS BEHAVIOR, AND THE BEHAVIOR OF OTHER COMMANDS IN THE PRESENCE OF SPARSE-CHECKOUTS, WILL LIKELY CHANGE IN THE FUTURE.

Commands
--------

*list*
Describe the directories or patterns in the sparse-checkout file.

*set*
Enable the necessary sparse-checkout config settings (`core.sparseCheckout`, `core.sparseCheckoutCone`, and `index.sparse`) if they are not already set to the desired values, populate the sparse-checkout file from the list of arguments following the `set` subcommand, and update the working directory to match. To ensure that adjusting the sparse-checkout settings within a worktree does not alter the sparse-checkout settings in other worktrees, the `set` subcommand will upgrade your repository config to use worktree-specific config if not already present. The sparsity defined by the arguments to the `set` subcommand is stored in the worktree-specific sparse-checkout file. See [git-worktree[1]](git-worktree) and the documentation of `extensions.worktreeConfig` in [git-config[1]](git-config) for more details. When the `--stdin` option is provided, the directories or patterns are read from standard input as a newline-delimited list instead of from the arguments. By default, the input list is considered a list of directories, matching the output of `git ls-tree -d --name-only`. This includes interpreting pathnames that begin with a double quote (") as C-style quoted strings. Note that all files under the specified directories (at any depth) will be included in the sparse checkout, as well as files that are siblings of either the given directory or any of its ancestors (see `CONE PATTERN SET` below for more details).
In the past, this was not the default, and `--cone` needed to be specified or `core.sparseCheckoutCone` needed to be enabled. When `--no-cone` is passed, the input list is considered a list of patterns. This mode has a number of drawbacks, including not working with some options like `--sparse-index`. As explained in the "Non-cone Problems" section below, we do not recommend using it. Use the `--[no-]sparse-index` option to use a sparse index (the default is to not use it). A sparse index reduces the size of the index to be more closely aligned with your sparse-checkout definition. This can have significant performance advantages for commands such as `git status` or `git add`. This feature is still experimental. Some commands might be slower with a sparse index until they are properly integrated with the feature. **WARNING:** Using a sparse index requires modifying the index in a way that is not completely understood by external tools. If you have trouble with this compatibility, then run `git sparse-checkout init --no-sparse-index` to rewrite your index to not be sparse. Older versions of Git will not understand the sparse directory entries index extension and may fail to interact with your repository until it is disabled. *add* Update the sparse-checkout file to include additional directories (in cone mode) or patterns (in non-cone mode). By default, these directories or patterns are read from the command-line arguments, but they can be read from stdin using the `--stdin` option. *reapply* Reapply the sparsity pattern rules to paths in the working tree. Commands like merge or rebase can materialize paths to do their work (e.g. in order to show you a conflict), and other sparse-checkout commands might fail to sparsify an individual file (e.g. because it has unstaged changes or conflicts). In such cases, it can make sense to run `git sparse-checkout reapply` later after cleaning up affected paths (e.g. resolving conflicts, undoing or committing changes, etc.). 
The `reapply` command can also take `--[no-]cone` and `--[no-]sparse-index` flags, with the same meaning as the flags from the `set` command, in order to change which sparsity mode you are using without needing to also respecify all sparsity paths. *disable* Disable the `core.sparseCheckout` config setting, and restore the working directory to include all files. *init* Deprecated command that behaves like `set` with no specified paths. May be removed in the future. Historically, `set` did not handle all the necessary config settings, which meant that both `init` and `set` had to be called. Invoking both meant the `init` step would first remove nearly all tracked files (and in cone mode, ignored files too), then the `set` step would add many of the tracked files (but not ignored files) back. In addition to the lost files, the performance and UI of this combination was poor. Also, historically, `init` would not actually initialize the sparse-checkout file if it already existed. This meant it was possible to return to a sparse-checkout without remembering which paths to pass to a subsequent `set` or `add` command. However, `--cone` and `--sparse-index` options would not be remembered across the disable command, so the easy restore of calling a plain `init` decreased in utility. Examples -------- `git sparse-checkout set MY/DIR1 SUB/DIR2` Change to a sparse checkout with all files (at any depth) under MY/DIR1/ and SUB/DIR2/ present in the working copy (plus all files immediately under MY/ and SUB/ and the toplevel directory). If already in a sparse checkout, change which files are present in the working copy to this new selection. Note that this command will also delete all ignored files in any directory that no longer has either tracked or non-ignored-untracked files present. `git sparse-checkout disable` Repopulate the working directory with all files, disabling sparse checkouts. 
`git sparse-checkout add SOME/DIR/ECTORY` Add all files under SOME/DIR/ECTORY/ (at any depth) to the sparse checkout, as well as all files immediately under SOME/DIR/ and immediately under SOME/. Must already be in a sparse checkout before using this command. `git sparse-checkout reapply` It is possible for commands to update the working tree in a way that does not respect the selected sparsity directories. This can come from tools external to Git writing files, or even affect Git commands because of either special cases (such as hitting conflicts when merging/rebasing), or because some commands didn’t fully support sparse checkouts (e.g. the old `recursive` merge backend had only limited support). This command reapplies the existing sparse directory specifications to make the working directory match. Internals — sparse checkout --------------------------- "Sparse checkout" allows populating the working directory sparsely. It uses the skip-worktree bit (see [git-update-index[1]](git-update-index)) to tell Git whether a file in the working directory is worth looking at. If the skip-worktree bit is set, and the file is not present in the working tree, then its absence is ignored. Git will avoid populating the contents of those files, which makes a sparse checkout helpful when working in a repository with many files, but only a few are important to the current user. The `$GIT_DIR/info/sparse-checkout` file is used to define the skip-worktree reference bitmap. When Git updates the working directory, it updates the skip-worktree bits in the index based on this file. The files matching the patterns in the file will appear in the working directory, and the rest will not. Internals — non-cone problems ----------------------------- The `$GIT_DIR/info/sparse-checkout` file populated by the `set` and `add` subcommands is defined to be a bunch of patterns (one per line) using the same syntax as `.gitignore` files. 
In cone mode, these patterns are restricted to matching directories (and users only ever need to supply or see directory names), while in non-cone mode any gitignore-style pattern is permitted. Using the full gitignore-style patterns in non-cone mode has a number of shortcomings:

* Fundamentally, it makes various worktree-updating processes (pull, merge, rebase, switch, reset, checkout, etc.) require O(N\*M) pattern matches, where N is the number of patterns and M is the number of paths in the index. This scales poorly.
* Avoiding the scaling issue has to be done via limiting the number of patterns via specifying leading directory name or glob.
* Passing globs on the command line is error-prone as users may forget to quote the glob, causing the shell to expand it into all matching files and pass them all individually along to sparse-checkout set/add. While this could also be a problem with e.g. "git grep -- \*.c", mistakes with grep/log/status appear in the immediate output. With sparse-checkout, the mistake gets recorded at the time the sparse-checkout command is run and might not be problematic until the user later switches branches or rebases or merges, thus putting a delay between the user’s error and when they have a chance to catch/notice it.
* Related to the previous item, sparse-checkout has an `add` subcommand but no `remove` subcommand. Even if a `remove` subcommand were added, undoing an accidental unquoted glob runs the risk of "removing too much", as it may remove entries that had been included before the accidental add.
* Non-cone mode uses gitignore-style patterns to select what to **include** (with the exception of negated patterns), while .gitignore files use gitignore-style patterns to select what to **exclude** (with the exception of negated patterns). The documentation on gitignore-style patterns usually does not talk in terms of matching or non-matching, but on what the user wants to "exclude".
This can cause confusion for users trying to learn how to specify sparse-checkout patterns to get their desired behavior.
* Every other git subcommand that wants to provide "special path pattern matching" of some sort uses pathspecs, but non-cone mode for sparse-checkout uses gitignore patterns, which feels inconsistent.
* It has edge cases where the "right" behavior is unclear. Two examples:

```
First, two users are in a subdirectory, and the first runs
   git sparse-checkout set '/toplevel-dir/*.c'
while the second runs
   git sparse-checkout set relative-dir
Should those arguments be transliterated into
   current/subdirectory/toplevel-dir/*.c
and
   current/subdirectory/relative-dir
before inserting into the sparse-checkout file? The user who typed the
first command is probably aware that arguments to set/add are supposed
to be patterns in non-cone mode, and probably would not be happy with
such a transliteration. However, many gitignore-style patterns are just
paths, which might be what the user who typed the second command was
thinking, and they'd be upset if their argument wasn't transliterated.
```

```
Second, what should bash-completion complete on for set/add commands
for non-cone users? If it suggests paths, is it exacerbating the
problem above? Also, if it suggests paths, what if the user has a file
or directory that begins with either a '!' or '#' or has a '*', '\',
'?', '[', or ']' in its name? And if it suggests paths, will it
complete "/pro" to "/proc" (in the root filesystem) rather than to
"/progress.txt" in the current directory? (Note that users are likely
to want to start paths with a leading '/' in non-cone mode, for the
same reason that .gitignore files often have one.) Completing on files
or directories might give nasty surprises in all these cases.
```

* The excessive flexibility made other extensions essentially impractical.
`--sparse-index` is likely impossible in non-cone mode; even if it is somehow feasible, it would have been far more work to implement and may have been too slow in practice. Some ideas for adding coupling between partial clones and sparse checkouts are only practical with a more restricted set of paths as well.

For all these reasons, non-cone mode is deprecated. Please switch to using cone mode.

Internals — cone mode handling
------------------------------

The "cone mode", which is the default, lets you specify only what directories to include. For any directory specified, all paths below that directory will be included, and any paths immediately under leading directories (including the toplevel directory) will also be included. Thus, if you specified the directory Documentation/technical/ then your sparse checkout would contain:

* all files in the toplevel-directory
* all files immediately under Documentation/
* all files at any depth under Documentation/technical/

Also, in cone mode, even if no directories are specified, the files in the toplevel directory will be included.

When changing the sparse-checkout patterns in cone mode, Git will inspect each tracked directory that is not within the sparse-checkout cone to see if it contains any untracked files. If all of those files are ignored due to the `.gitignore` patterns, then the directory will be deleted. If any of the untracked files within that directory are not ignored, then no deletions will occur within that directory and a warning message will appear. If these files are important, then reset your sparse-checkout definition so they are included, use `git add` and `git commit` to store them, then remove any remaining files manually to ensure Git can behave optimally.

See also the "Internals — Cone Pattern Set" section to learn how the directories are transformed under the hood into a subset of the Full Pattern Set of sparse-checkout.
Internals — full pattern set
----------------------------

The full pattern set allows for arbitrary pattern matches and complicated inclusion/exclusion rules. These can result in O(N\*M) pattern matches when updating the index, where N is the number of patterns and M is the number of paths in the index. To combat this performance issue, a more restricted pattern set is allowed when `core.sparseCheckoutCone` is enabled.

The sparse-checkout file uses the same syntax as `.gitignore` files; see [gitignore[5]](gitignore) for details. Here, though, the patterns are usually being used to select which files to include rather than which files to exclude. (However, it can get a bit confusing since gitignore-style patterns have negations defined by patterns which begin with a `!`, so you can also select files to `not` include.)

For example, to select everything, and then to remove the file `unwanted` (so that every file will appear in your working tree except the file named `unwanted`):

```
git sparse-checkout set --no-cone '/*' '!unwanted'
```

These patterns are just placed into the `$GIT_DIR/info/sparse-checkout` as-is, so the contents of that file at this point would be

```
/*
!unwanted
```

See also the "Sparse Checkout" section of [git-read-tree[1]](git-read-tree) to learn more about the gitignore-style patterns used in sparse checkouts.

Internals — cone pattern set
----------------------------

In cone mode, only directories are accepted, but they are translated into the same gitignore-style patterns used in the full pattern set. We refer to the particular patterns used in this mode as being of one of two types:

1. **Recursive:** All paths inside a directory are included.
2. **Parent:** All files immediately inside a directory are included.

Since cone mode always includes files at the toplevel, when running `git sparse-checkout set` with no directories specified, the toplevel directory is added as a parent pattern.
At this point, the sparse-checkout file contains the following patterns: ``` /* !/*/ ``` This says "include everything immediately under the toplevel directory, but nothing at any level below that." When in cone mode, the `git sparse-checkout set` subcommand takes a list of directories. The command `git sparse-checkout set A/B/C` sets the directory `A/B/C` as a recursive pattern, the directories `A` and `A/B` are added as parent patterns. The resulting sparse-checkout file is now ``` /* !/*/ /A/ !/A/*/ /A/B/ !/A/B/*/ /A/B/C/ ``` Here, order matters, so the negative patterns are overridden by the positive patterns that appear lower in the file. Unless `core.sparseCheckoutCone` is explicitly set to `false`, Git will parse the sparse-checkout file expecting patterns of these types. Git will warn if the patterns do not match. If the patterns do match the expected format, then Git will use faster hash-based algorithms to compute inclusion in the sparse-checkout. If they do not match, git will behave as though `core.sparseCheckoutCone` was false, regardless of its setting. In the cone mode case, despite the fact that full patterns are written to the $GIT\_DIR/info/sparse-checkout file, the `git sparse-checkout list` subcommand will list the directories that define the recursive patterns. For the example sparse-checkout file above, the output is as follows: ``` $ git sparse-checkout list A/B/C ``` If `core.ignoreCase=true`, then the pattern-matching algorithm will use a case-insensitive check. This corrects for case mismatched filenames in the `git sparse-checkout set` command to reflect the expected cone in the working directory. Internals — submodules ---------------------- If your repository contains one or more submodules, then submodules are populated based on interactions with the `git submodule` command. 
Specifically, `git submodule init -- <path>` will ensure the submodule at `<path>` is present, while `git submodule deinit [-f] -- <path>` will remove the files for the submodule at `<path>` (including any untracked files, uncommitted changes, and unpushed history). Similar to how sparse-checkout removes files from the working tree but still leaves entries in the index, deinitialized submodules are removed from the working directory but still have an entry in the index. Since submodules may have unpushed changes or untracked files, removing them could result in data loss. Thus, changing sparse inclusion/exclusion rules will not cause an already checked out submodule to be removed from the working copy. Said another way, just as `checkout` will not cause submodules to be automatically removed or initialized even when switching between branches that remove or add submodules, using `sparse-checkout` to reduce or expand the scope of "interesting" files will not cause submodules to be automatically deinitialized or initialized either. Further, the above facts mean that there are multiple reasons that "tracked" files might not be present in the working copy: sparsity pattern application from sparse-checkout, and submodule initialization state. Thus, commands like `git grep` that work on tracked files in the working copy may return results that are limited by either or both of these restrictions. See also -------- [git-read-tree[1]](git-read-tree) [gitignore[5]](gitignore)
git git-switch

git-switch
==========

Name
----

git-switch - Switch branches

Synopsis
--------

```
git switch [<options>] [--no-guess] <branch>
git switch [<options>] --detach [<start-point>]
git switch [<options>] (-c|-C) <new-branch> [<start-point>]
git switch [<options>] --orphan <new-branch>
```

Description
-----------

Switch to a specified branch. The working tree and the index are updated to match the branch. All new commits will be added to the tip of this branch.

Optionally a new branch can be created with either `-c` or `-C`, or created automatically from a remote branch of the same name (see `--guess`); the working tree can also be detached from any branch with `--detach`, along with switching.

Switching branches does not require a clean index and working tree (i.e. no differences compared to `HEAD`). The operation is aborted however if the operation leads to loss of local changes, unless told otherwise with `--discard-changes` or `--merge`.

THIS COMMAND IS EXPERIMENTAL. THE BEHAVIOR MAY CHANGE.

Options
-------

<branch>
Branch to switch to.

<new-branch>
Name for the new branch.

<start-point>
The starting point for the new branch. Specifying a `<start-point>` allows you to create a branch based on some other point in history than where HEAD currently points. (Or, in the case of `--detach`, allows you to inspect and detach from some other point.) You can use the `@{-N}` syntax to refer to the N-th last branch/commit switched to using "git switch" or "git checkout" operation. You may also specify `-` which is synonymous to `@{-1}`. This is often used to switch quickly between two branches, or to undo a branch switch by mistake. As a special case, you may use `A...B` as a shortcut for the merge base of `A` and `B` if there is exactly one merge base. You can leave out at most one of `A` and `B`, in which case it defaults to `HEAD`.

-c <new-branch>
--create <new-branch>
Create a new branch named `<new-branch>` starting at `<start-point>` before switching to the branch.
This is a convenient shortcut for: ``` $ git branch <new-branch> $ git switch <new-branch> ``` -C <new-branch> --force-create <new-branch> Similar to `--create` except that if `<new-branch>` already exists, it will be reset to `<start-point>`. This is a convenient shortcut for: ``` $ git branch -f <new-branch> $ git switch <new-branch> ``` -d --detach Switch to a commit for inspection and discardable experiments. See the "DETACHED HEAD" section in [git-checkout[1]](git-checkout) for details. --guess --no-guess If `<branch>` is not found but there does exist a tracking branch in exactly one remote (call it `<remote>`) with a matching name, treat as equivalent to ``` $ git switch -c <branch> --track <remote>/<branch> ``` If the branch exists in multiple remotes and one of them is named by the `checkout.defaultRemote` configuration variable, we’ll use that one for the purposes of disambiguation, even if the `<branch>` isn’t unique across all remotes. Set it to e.g. `checkout.defaultRemote=origin` to always checkout remote branches from there if `<branch>` is ambiguous but exists on the `origin` remote. See also `checkout.defaultRemote` in [git-config[1]](git-config). `--guess` is the default behavior. Use `--no-guess` to disable it. The default behavior can be set via the `checkout.guess` configuration variable. -f --force An alias for `--discard-changes`. --discard-changes Proceed even if the index or the working tree differs from `HEAD`. Both the index and working tree are restored to match the switching target. If `--recurse-submodules` is specified, submodule content is also restored to match the switching target. This is used to throw away local changes. -m --merge If you have local modifications to one or more files that are different between the current branch and the branch to which you are switching, the command refuses to switch branches in order to preserve your modifications in context. 
However, with this option, a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch. When a merge conflict happens, the index entries for conflicting paths are left unmerged, and you need to resolve the conflicts and mark the resolved paths with `git add` (or `git rm` if the merge should result in deletion of the path).

--conflict=<style>
The same as the `--merge` option above, but changes the way the conflicting hunks are presented, overriding the `merge.conflictStyle` configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3".

-q
--quiet
Quiet, suppress feedback messages.

--progress
--no-progress
Progress status is reported on the standard error stream by default when it is attached to a terminal, unless `--quiet` is specified. This flag enables progress reporting even if not attached to a terminal, regardless of `--quiet`.

-t
--track [direct|inherit]
When creating a new branch, set up "upstream" configuration. `-c` is implied. See `--track` in [git-branch[1]](git-branch) for details.

If no `-c` option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "*". This would tell us to use `hack` as the local branch when branching off of `origin/hack` (or `remotes/origin/hack`, or even `refs/remotes/origin/hack`). If the given name has no slash, or the above guessing results in an empty name, the guessing is aborted. You can explicitly give a name with `-c` in such a case.

--no-track
Do not set up "upstream" configuration, even if the `branch.autoSetupMerge` configuration variable is true.

--orphan <new-branch>
Create a new `orphan` branch, named `<new-branch>`. All tracked files are removed.

--ignore-other-worktrees
`git switch` refuses when the wanted ref is already checked out by another worktree.
This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree.

--recurse-submodules
--no-recurse-submodules
Using `--recurse-submodules` will update the content of all active submodules according to the commit recorded in the superproject. If nothing (or `--no-recurse-submodules`) is used, the working trees of submodules will not be updated. Just like [git-submodule[1]](git-submodule), this will detach `HEAD` of the submodules.

Examples
--------

The following command switches to the "master" branch:

```
$ git switch master
```

After working in the wrong branch, switching to the correct branch would be done using:

```
$ git switch mytopic
```

However, your "wrong" branch and correct "mytopic" branch may differ in files that you have modified locally, in which case the above switch would fail like this:

```
$ git switch mytopic
error: You have local changes to 'frotz'; not switching branches.
```

You can give the `-m` flag to the command, which would try a three-way merge:

```
$ git switch -m mytopic
Auto-merging frotz
```

After this three-way merge, the local modifications are *not* registered in your index file, so `git diff` would show you what changes you made since the tip of the new branch.

To switch back to the previous branch before we switched to mytopic (i.e. "master" branch):

```
$ git switch -
```

You can grow a new branch from any commit.
For example, switch to "HEAD~3" and create branch "fixup": ``` $ git switch -c fixup HEAD~3 Switched to a new branch 'fixup' ``` If you want to start a new branch from a remote branch of the same name: ``` $ git switch new-topic Branch 'new-topic' set up to track remote branch 'new-topic' from 'origin' Switched to a new branch 'new-topic' ``` To check out commit `HEAD~3` for temporary inspection or experiment without creating a new branch: ``` $ git switch --detach HEAD~3 HEAD is now at 9fc9555312 Merge branch 'cc/shared-index-permbits' ``` If it turns out whatever you have done is worth keeping, you can always create a new name for it (without switching away): ``` $ git switch -c good-surprises ``` Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: checkout.defaultRemote When you run `git checkout <something>` or `git switch <something>` and only have one remote, it may implicitly fall back on checking out and tracking e.g. `origin/<something>`. This stops working as soon as you have more than one remote with a `<something>` reference. This setting allows for setting the name of a preferred remote that should always win when it comes to disambiguation. The typical use-case is to set this to `origin`. Currently this is used by [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout) when `git checkout <something>` or `git switch <something>` will checkout the `<something>` branch on another remote, and by [git-worktree[1]](git-worktree) when `git worktree add` refers to a remote branch. This setting might be used for other checkout-like commands or functionality in the future. checkout.guess Provides the default value for the `--guess` or `--no-guess` option in `git checkout` and `git switch`. See [git-switch[1]](git-switch) and [git-checkout[1]](git-checkout). 
checkout.workers
The number of parallel workers to use when updating the working tree. The default is one, i.e. sequential execution. If set to a value less than one, Git will use as many workers as the number of logical cores available. This setting and `checkout.thresholdForParallelism` affect all commands that perform checkout. E.g. checkout, clone, reset, sparse-checkout, etc.

Note: parallel checkout usually delivers better performance for repositories located on SSDs or over NFS. For repositories on spinning disks and/or machines with a small number of cores, the default sequential checkout often performs better. The size and compression level of a repository might also influence how well the parallel version performs.

checkout.thresholdForParallelism
When running parallel checkout with a small number of files, the cost of subprocess spawning and inter-process communication might outweigh the parallelization gains. This setting allows you to define the minimum number of files for which parallel checkout should be attempted. The default is 100.

See also
--------

[git-checkout[1]](git-checkout), [git-branch[1]](git-branch)

git git-p4 git-p4
======

Name
----

git-p4 - Import from and submit to Perforce repositories

Synopsis
--------

```
git p4 clone [<sync-options>] [<clone-options>] <p4-depot-path>...
git p4 sync [<sync-options>] [<p4-depot-path>...]
git p4 rebase
git p4 submit [<submit-options>] [<master-branch-name>]
```

Description
-----------

This command provides a way to interact with p4 repositories using Git. Create a new Git repository from an existing p4 repository using `git p4 clone`, giving it one or more p4 depot paths. Incorporate new commits from p4 changes with `git p4 sync`. The `sync` command is also used to include new branches from other p4 depot paths. Submit Git changes back to p4 using `git p4 submit`. The command `git p4 rebase` does a sync plus rebases the current branch onto the updated p4 remote branch.
Examples -------- * Clone a repository: ``` $ git p4 clone //depot/path/project ``` * Do some work in the newly created Git repository: ``` $ cd project $ vi foo.h $ git commit -a -m "edited foo.h" ``` * Update the Git repository with recent changes from p4, rebasing your work on top: ``` $ git p4 rebase ``` * Submit your commits back to p4: ``` $ git p4 submit ``` Commands -------- ### Clone Generally, `git p4 clone` is used to create a new Git directory from an existing p4 repository: ``` $ git p4 clone //depot/path/project ``` This: 1. Creates an empty Git repository in a subdirectory called `project`. 2. Imports the full contents of the head revision from the given p4 depot path into a single commit in the Git branch `refs/remotes/p4/master`. 3. Creates a local branch, `master` from this remote and checks it out. To reproduce the entire p4 history in Git, use the `@all` modifier on the depot path: ``` $ git p4 clone //depot/path/project@all ``` ### Sync As development continues in the p4 repository, those changes can be included in the Git repository using: ``` $ git p4 sync ``` This command finds new changes in p4 and imports them as Git commits. P4 repositories can be added to an existing Git repository using `git p4 sync` too: ``` $ mkdir repo-git $ cd repo-git $ git init $ git p4 sync //path/in/your/perforce/depot ``` This imports the specified depot into `refs/remotes/p4/master` in an existing Git repository. The `--branch` option can be used to specify a different branch to be used for the p4 content. If a Git repository includes branches `refs/remotes/origin/p4`, these will be fetched and consulted first during a `git p4 sync`. Since importing directly from p4 is considerably slower than pulling changes from a Git remote, this can be useful in a multi-developer environment. If there are multiple branches, doing `git p4 sync` will automatically use the "BRANCH DETECTION" algorithm to try to partition new changes into the right branch. 
This can be overridden with the `--branch` option to specify just a single branch to update. ### Rebase A common working pattern is to fetch the latest changes from the p4 depot and merge them with local uncommitted changes. Often, the p4 repository is the ultimate location for all code, thus a rebase workflow makes sense. This command does `git p4 sync` followed by `git rebase` to move local commits on top of updated p4 changes. ``` $ git p4 rebase ``` ### Submit Submitting changes from a Git repository back to the p4 repository requires a separate p4 client workspace. This should be specified using the `P4CLIENT` environment variable or the Git configuration variable `git-p4.client`. The p4 client must exist, but the client root will be created and populated if it does not already exist. To submit all changes that are in the current Git branch but not in the `p4/master` branch, use: ``` $ git p4 submit ``` To specify a branch other than the current one, use: ``` $ git p4 submit topicbranch ``` To specify a single commit or a range of commits, use: ``` $ git p4 submit --commit <sha1> $ git p4 submit --commit <sha1..sha1> ``` The upstream reference is generally `refs/remotes/p4/master`, but can be overridden using the `--origin=` command-line option. The p4 changes will be created as the user invoking `git p4 submit`. The `--preserve-user` option will cause ownership to be modified according to the author of the Git commit. This option requires admin privileges in p4, which can be granted using `p4 protect`. To shelve changes instead of submitting, use `--shelve` and `--update-shelve`: ``` $ git p4 submit --shelve $ git p4 submit --update-shelve 1234 --update-shelve 2345 ``` ### Unshelve Unshelving will take a shelved P4 changelist, and produce the equivalent git commit in the branch refs/remotes/p4-unshelved/<changelist>. The git commit is created relative to the current origin revision (HEAD by default). 
A parent commit is created based on the origin, and then the unshelve commit is created based on that. The origin revision can be changed with the "--origin" option.

If the target branch in refs/remotes/p4-unshelved already exists, the old one will be renamed.

```
$ git p4 sync
$ git p4 unshelve 12345
$ git show p4-unshelved/12345
<submit more changes via p4 to the same files>
$ git p4 unshelve 12345
<refuses to unshelve until git is in sync with p4 again>
```

Options
-------

### General options

All commands except clone accept these options.

--git-dir <dir>
Set the `GIT_DIR` environment variable. See [git[1]](git).

-v
--verbose
Provide more progress information.

### Sync options

These options can be used in the initial `clone` as well as in subsequent `sync` operations.

--branch <ref>
Import changes into <ref> instead of refs/remotes/p4/master. If <ref> starts with refs/, it is used as is. Otherwise, if it does not start with p4/, that prefix is added.

By default a <ref> not starting with refs/ is treated as the name of a remote-tracking branch (under refs/remotes/). This behavior can be modified using the --import-local option.

The default <ref> is "master".

This example imports a new remote "p4/proj2" into an existing Git repository:

```
$ git init
$ git p4 sync --branch=refs/remotes/p4/proj2 //depot/proj2
```

--detect-branches
Use the branch detection algorithm to find new paths in p4. It is documented below in "BRANCH DETECTION".

--changesfile <file>
Import exactly the p4 change numbers listed in `file`, one per line. Normally, `git p4` inspects the current p4 repository state and detects the changes it should import.

--silent
Do not print any progress information.

--detect-labels
Query p4 for labels associated with the depot paths, and add them as tags in Git. Of limited usefulness, as it only imports labels associated with new changelists. Deprecated.

--import-labels
Import labels from p4 into Git.
--import-local
By default, p4 branches are stored in `refs/remotes/p4/`, where they will be treated as remote-tracking branches by [git-branch[1]](git-branch) and other commands. This option instead puts p4 branches in `refs/heads/p4/`. Note that future sync operations must specify `--import-local` as well so that they can find the p4 branches in refs/heads.

--max-changes <n>
Import at most `n` changes, rather than the entire range of changes included in the given revision specifier. A typical usage would be to use `@all` as the revision specifier, but then to use `--max-changes 1000` to import only the last 1000 revisions rather than the entire revision history.

--changes-block-size <n>
The internal block size to use when converting a revision specifier such as `@all` into a list of specific change numbers. Instead of using a single call to `p4 changes` to find the full list of changes for the conversion, there is a sequence of calls to `p4 changes -m`, each of which requests one block of changes of the given size. The default block size is 500, which should usually be suitable.

--keep-path
The mapping of file names from the p4 depot path to Git, by default, involves removing the entire depot path. With this option, the full p4 depot path is retained in Git. For example, path `//depot/main/foo/bar.c`, when imported from `//depot/main/`, becomes `foo/bar.c`. With `--keep-path`, the Git path is instead `depot/main/foo/bar.c`.

--use-client-spec
Use a client spec to find the list of interesting files in p4. See the "CLIENT SPEC" section below.

-/ <path>
Exclude selected depot paths when cloning or syncing.

### Clone options

These options can be used in an initial `clone`, along with the `sync` options described above.

--destination <directory>
Where to create the Git repository. If not provided, the last component in the p4 depot path is used to create a new directory.

--bare
Perform a bare clone. See [git-clone[1]](git-clone).
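The sync and clone options above compose. As an illustrative sketch only (the depot path is hypothetical, a reachable Perforce server is required, and the placement of `-/` relative to the depot path is an assumption), a bare, history-limited mirror that skips one subtree might be requested like this:

```shell
$ git p4 clone --bare --destination=mirror.git \
      --max-changes 1000 \
      -/ //depot/proj/docs/ \
      //depot/proj@all
```

Because `--max-changes` counts changes within the given revision specifier, `@all` plus `--max-changes 1000` keeps only the most recent 1000 changelists.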
### Submit options

These options can be used to modify `git p4 submit` behavior.

--origin <commit>
Upstream location from which commits are identified to submit to p4. By default, this is the most recent p4 commit reachable from `HEAD`.

-M
Detect renames. See [git-diff[1]](git-diff). Renames will be represented in p4 using explicit `move` operations. There is no corresponding option to detect copies, but there are variables for both moves and copies.

--preserve-user
Re-author p4 changes before submitting to p4. This option requires p4 admin privileges.

--export-labels
Export tags from Git as p4 labels. Tags found in Git are applied to the Perforce working directory.

-n
--dry-run
Show just what commits would be submitted to p4; do not change state in Git or p4.

--prepare-p4-only
Apply a commit to the p4 workspace, opening, adding and deleting files in p4 as for a normal submit operation. Do not issue the final "p4 submit", but instead print a message about how to submit manually or revert. This option always stops after the first (oldest) commit. Git tags are not exported to p4.

--shelve
Instead of submitting, create a series of shelved changelists. After creating each shelve, the relevant files are reverted/deleted. If you have multiple commits pending, multiple shelves will be created.

--update-shelve CHANGELIST
Update an existing shelved changelist with this commit. Implies --shelve. Repeat for multiple shelved changelists.

--conflict=(ask|skip|quit)
Conflicts can occur when applying a commit to p4. When this happens, the default behavior ("ask") is to prompt whether to skip this commit and continue, or quit. This option can be used to bypass the prompt, causing conflicting commits to be automatically skipped, or to quit trying to apply commits, without prompting.

--branch <branch>
After submitting, sync this named branch instead of the default p4/master. See the "Sync options" section above for more information.
--commit (<sha1>|<sha1>..<sha1>)
Submit only the specified commit or range of commits, instead of the full list of changes that are in the current Git branch.

--disable-rebase
Disable the automatic rebase after all commits have been successfully submitted. Can also be set with git-p4.disableRebase.

--disable-p4sync
Disable the automatic sync of p4/master from Perforce after commits have been submitted. Implies --disable-rebase. Can also be set with git-p4.disableP4Sync. Sync with origin/master still goes ahead if possible.

Hooks for submit
----------------

### p4-pre-submit

The `p4-pre-submit` hook is executed if it exists and is executable. The hook takes no parameters and nothing from standard input. Exiting with non-zero status from this script prevents `git-p4 submit` from launching. It can be bypassed with the `--no-verify` command line option.

One usage scenario is to run unit tests in the hook.

### p4-prepare-changelist

The `p4-prepare-changelist` hook is executed right after preparing the default changelist message and before the editor is started. It takes one parameter, the name of the file that contains the changelist text. Exiting with a non-zero status from the script will abort the process.

The purpose of the hook is to edit the message file in place, and it is not suppressed by the `--no-verify` option. This hook is called even if `--prepare-p4-only` is set.

### p4-changelist

The `p4-changelist` hook is executed after the changelist message has been edited by the user. It can be bypassed with the `--no-verify` option. It takes a single parameter, the name of the file that holds the proposed changelist text. Exiting with a non-zero status causes the command to abort.

The hook is allowed to edit the changelist file and can be used to normalize the text into some project standard format. It can also be used to refuse the submit after inspecting the message file.
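As a minimal sketch of such a hook, the script below refuses the submit when the description has not been edited. The placeholder string it checks for is an assumption modeled on Perforce's stock change template; adjust it to your server's template:

```shell
#!/bin/sh
# Hypothetical p4-changelist hook sketch.  git p4 passes the path of the
# file holding the proposed changelist text as $1; a non-zero exit
# aborts the submit.

check_changelist() {
    # Succeed only if the description no longer contains the assumed
    # stock placeholder line.
    ! grep -q '<enter description here>' "$1"
}

# When invoked as a hook, $1 names the changelist message file.
if [ -n "$1" ]; then
    check_changelist "$1" || {
        echo "p4-changelist: please write a real changelist description" >&2
        exit 1
    }
fi
```

Saved as `p4-changelist` in the repository's hooks directory and made executable, this runs after the editor closes, per the description above.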
### p4-post-changelist The `p4-post-changelist` hook is invoked after the submit has successfully occurred in P4. It takes no parameters and is meant primarily for notification and cannot affect the outcome of the git p4 submit action. ### Rebase options These options can be used to modify `git p4 rebase` behavior. --import-labels Import p4 labels. ### Unshelve options --origin Sets the git refspec against which the shelved P4 changelist is compared. Defaults to p4/master. Depot path syntax ----------------- The p4 depot path argument to `git p4 sync` and `git p4 clone` can be one or more space-separated p4 depot paths, with an optional p4 revision specifier on the end: "//depot/my/project" Import one commit with all files in the `#head` change under that tree. "//depot/my/project@all" Import one commit for each change in the history of that depot path. "//depot/my/project@1,6" Import only changes 1 through 6. "//depot/proj1@all //depot/proj2@all" Import all changes from both named depot paths into a single repository. Only files below these directories are included. There is not a subdirectory in Git for each "proj1" and "proj2". You must use the `--destination` option when specifying more than one depot path. The revision specifier must be specified identically on each depot path. If there are files in the depot paths with the same name, the path with the most recently updated version of the file is the one that appears in Git. See `p4 help revisions` for the full syntax of p4 revision specifiers. Client spec ----------- The p4 client specification is maintained with the `p4 client` command and contains among other fields, a View that specifies how the depot is mapped into the client repository. The `clone` and `sync` commands can consult the client spec when given the `--use-client-spec` option or when the useClientSpec variable is true. After `git p4 clone`, the useClientSpec variable is automatically set in the repository configuration file. 
This allows future `git p4 submit` commands to work properly; the submit command looks only at the variable and does not have a command-line option.

The full syntax for a p4 view is documented in `p4 help views`. `git p4` knows only a subset of the view syntax. It understands multi-line mappings, overlays with `+`, exclusions with `-` and double-quotes around whitespace. Of the possible wildcards, `git p4` only handles `...`, and only when it is at the end of the path. `git p4` will complain if it encounters an unhandled wildcard.

Bugs in the implementation of overlap mappings exist. If multiple depot paths map through overlays to the same location in the repository, `git p4` can choose the wrong one. This is hard to solve without dedicating a client spec just for `git p4`.

The name of the client can be given to `git p4` in multiple ways. The variable `git-p4.client` takes precedence if it exists. Otherwise, normal p4 mechanisms of determining the client are used: environment variable `P4CLIENT`, a file referenced by `P4CONFIG`, or the local host name.

Branch detection
----------------

P4 does not have the same concept of a branch as Git. Instead, p4 organizes its content as a directory tree, where by convention different logical branches are in different locations in the tree. The `p4 branch` command is used to maintain mappings between different areas in the tree, and indicate related content. `git p4` can use these mappings to determine branch relationships.

If you have a repository where all the branches of interest exist as subdirectories of a single depot path, you can use `--detect-branches` when cloning or syncing to have `git p4` automatically find subdirectories in p4, and to generate these as branches in Git.

For example, if the P4 repository structure is:

```
//depot/main/...
//depot/branch1/...
```

And "p4 branch -o branch1" shows a View line that looks like:

```
//depot/main/... //depot/branch1/...
``` Then this `git p4 clone` command: ``` git p4 clone --detect-branches //depot@all ``` produces a separate branch in `refs/remotes/p4/` for //depot/main, called `master`, and one for //depot/branch1 called `depot/branch1`. However, it is not necessary to create branches in p4 to be able to use them like branches. Because it is difficult to infer branch relationships automatically, a Git configuration setting `git-p4.branchList` can be used to explicitly identify branch relationships. It is a list of "source:destination" pairs, like a simple p4 branch specification, where the "source" and "destination" are the path elements in the p4 repository. The example above relied on the presence of the p4 branch. Without p4 branches, the same result will occur with: ``` git init depot cd depot git config git-p4.branchList main:branch1 git p4 clone --detect-branches //depot@all . ``` Performance ----------- The fast-import mechanism used by `git p4` creates one pack file for each invocation of `git p4 sync`. Normally, Git garbage compression ([git-gc[1]](git-gc)) automatically compresses these to fewer pack files, but explicit invocation of `git repack -adf` may improve performance. Configuration variables ----------------------- The following config settings can be used to modify `git p4` behavior. They all are in the `git-p4` section. ### General variables git-p4.user User specified as an option to all p4 commands, with `-u <user>`. The environment variable `P4USER` can be used instead. git-p4.password Password specified as an option to all p4 commands, with `-P <password>`. The environment variable `P4PASS` can be used instead. git-p4.port Port specified as an option to all p4 commands, with `-p <port>`. The environment variable `P4PORT` can be used instead. git-p4.host Host specified as an option to all p4 commands, with `-h <host>`. The environment variable `P4HOST` can be used instead. 
git-p4.client
Client specified as an option to all p4 commands, with `-c <client>`, including the client spec.

git-p4.retries
Specifies the number of times to retry a p4 command (notably, `p4 sync`) if the network times out. The default value is 3. Set the value to 0 to disable retries or if your p4 version does not support retries (pre 2012.2).

### Clone and sync variables

git-p4.syncFromOrigin
Because importing commits from other Git repositories is much faster than importing them from p4, a mechanism exists to find p4 changes first in Git remotes. If branches exist under `refs/remotes/origin/p4`, those will be fetched and used when syncing from p4. This variable can be set to `false` to disable this behavior.

git-p4.branchUser
One phase in branch detection involves looking at p4 branches to find new ones to import. By default, all branches are inspected. This option limits the search to just those owned by the single user named in the variable.

git-p4.branchList
List of branches to be imported when branch detection is enabled. Each entry should be a pair of branch names separated by a colon (:). This example declares that both branchA and branchB were created from main:

```
git config git-p4.branchList main:branchA
git config --add git-p4.branchList main:branchB
```

git-p4.ignoredP4Labels
List of p4 labels to ignore. This is built automatically as unimportable labels are discovered.

git-p4.importLabels
Import p4 labels into git, as per --import-labels.

git-p4.labelImportRegexp
Only p4 labels matching this regular expression will be imported. The default value is `[a-zA-Z0-9_\-.]+$`.

git-p4.useClientSpec
Specify that the p4 client spec should be used to identify p4 depot paths of interest. This is equivalent to specifying the option `--use-client-spec`. See the "CLIENT SPEC" section above. This variable is a boolean, not the name of a p4 client.

git-p4.pathEncoding
Perforce keeps the encoding of a path as given by the originating OS.
Git expects paths encoded as UTF-8. Use this config to tell git-p4 what encoding Perforce had used for the paths. This encoding is used to transcode the paths to UTF-8. As an example, Perforce on Windows often uses "cp1252" to encode path names. If this option is passed into a p4 clone request, it is persisted in the resulting new git repo.

git-p4.metadataDecodingStrategy
Perforce keeps the encoding of changelist descriptions and user full names as stored by the client on a given OS. The p4v client uses the OS-local encoding, and so different users can end up storing different changelist descriptions or user full names in different encodings, in the same depot. Git tolerates inconsistent/incorrect encodings in commit messages and author names, but expects them to be specified in utf-8. git-p4 can use three different decoding strategies in handling the encoding uncertainty in Perforce: `passthrough` simply passes the original bytes through from Perforce to git, creating usable but incorrectly-encoded data when the Perforce data is encoded as anything other than utf-8. `strict` expects the Perforce data to be encoded as utf-8, and fails to import when this is not true. `fallback` attempts to interpret the data as utf-8, and otherwise falls back to using a secondary encoding - by default the common Windows encoding `cp1252` - with upper-range bytes escaped if decoding with the fallback encoding also fails. Under python2 the default strategy is `passthrough` for historical reasons, and under python3 the default is `fallback`. When `strict` is selected and decoding fails, the error message will propose changing this config parameter as a workaround. If this option is passed into a p4 clone request, it is persisted into the resulting new git repo.

git-p4.metadataFallbackEncoding
Specify the fallback encoding to use when decoding Perforce author names and changelist descriptions using the `fallback` strategy (see git-p4.metadataDecodingStrategy).
The fallback encoding will only be used when decoding as utf-8 fails. This option defaults to cp1252, a common Windows encoding. If this option is passed into a p4 clone request, it is persisted into the resulting new git repo.

git-p4.largeFileSystem
Specify the system that is used for large (binary) files. Please note that large file systems do not support the `git p4 submit` command. Only Git LFS is implemented right now (see <https://git-lfs.github.com/> for more information). Download and install the Git LFS command line extension to use this option and configure it like this:

```
git config git-p4.largeFileSystem GitLFS
```

git-p4.largeFileExtensions
All files matching a file extension in the list will be processed by the large file system. Do not prefix the extensions with `.`.

git-p4.largeFileThreshold
All files with an uncompressed size exceeding the threshold will be processed by the large file system. By default the threshold is defined in bytes. Add the suffix k, m, or g to change the unit.

git-p4.largeFileCompressedThreshold
All files with a compressed size exceeding the threshold will be processed by the large file system. This option might slow down your clone/sync process. By default the threshold is defined in bytes. Add the suffix k, m, or g to change the unit.

git-p4.largeFilePush
Boolean variable which defines if large files are automatically pushed to a server.

git-p4.keepEmptyCommits
A changelist that contains only excluded files will be imported as an empty commit if this boolean option is set to true.

git-p4.mapUser
Map a P4 user to a name and email address in Git. Use a string with the following format to create a mapping:

```
git config --add git-p4.mapUser "p4user = First Last <[email protected]>"
```

A mapping will override any user information from P4. Mappings for multiple P4 users can be defined.

### Submit variables

git-p4.detectRenames
Detect renames. See [git-diff[1]](git-diff).
This can be true, false, or a score as expected by `git diff -M`. git-p4.detectCopies Detect copies. See [git-diff[1]](git-diff). This can be true, false, or a score as expected by `git diff -C`. git-p4.detectCopiesHarder Detect copies harder. See [git-diff[1]](git-diff). A boolean. git-p4.preserveUser On submit, re-author changes to reflect the Git author, regardless of who invokes `git p4 submit`. git-p4.allowMissingP4Users When `preserveUser` is true, `git p4` normally dies if it cannot find an author in the p4 user map. This setting submits the change regardless. git-p4.skipSubmitEdit The submit process invokes the editor before each p4 change is submitted. If this setting is true, though, the editing step is skipped. git-p4.skipSubmitEditCheck After editing the p4 change message, `git p4` makes sure that the description really was changed by looking at the file modification time. This option disables that test. git-p4.allowSubmit By default, any branch can be used as the source for a `git p4 submit` operation. This configuration variable, if set, permits only the named branches to be used as submit sources. Branch names must be the short names (no "refs/heads/"), and should be separated by commas (","), with no spaces. git-p4.skipUserNameCheck If the user running `git p4 submit` does not exist in the p4 user map, `git p4` exits. This option can be used to force submission regardless. git-p4.attemptRCSCleanup If enabled, `git p4 submit` will attempt to cleanup RCS keywords ($Header$, etc). These would otherwise cause merge conflicts and prevent the submit going ahead. This option should be considered experimental at present. git-p4.exportLabels Export Git tags to p4 labels, as per --export-labels. git-p4.labelExportRegexp Only p4 labels matching this regular expression will be exported. The default value is `[a-zA-Z0-9_\-.]+$`. git-p4.conflict Specify submit behavior when a conflict with p4 is found, as per --conflict. The default behavior is `ask`. 
git-p4.disableRebase Do not rebase the tree against p4/master following a submit. git-p4.disableP4Sync Do not sync p4/master with Perforce following a submit. Implies git-p4.disableRebase. Implementation details ---------------------- * Changesets from p4 are imported using Git fast-import. * Cloning or syncing does not require a p4 client; file contents are collected using `p4 print`. * Submitting requires a p4 client, which is not in the same location as the Git repository. Patches are applied, one at a time, to this p4 client and submitted from there. * Each commit imported by `git p4` has a line at the end of the log message indicating the p4 depot location and change number. This line is used by later `git p4 sync` operations to know which p4 changes are new.
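As an illustrative sketch (the option names are real, but the chosen values and the throwaway repository are only for demonstration), several of the submit variables above can be set with plain `git config`:

```shell
# Hypothetical example: set a few git-p4 submit variables in a throwaway repo.
tmp=$(mktemp -d)
git init -q "$tmp"
cd "$tmp"
git config git-p4.skipSubmitEdit true      # skip the editor step on submit
git config git-p4.conflict quit            # stop instead of asking on conflict
git config git-p4.largeFileThreshold 1m    # 1 MiB uncompressed threshold
git config --get git-p4.conflict           # prints: quit
```

Because these are ordinary config values, they can also be set with `--global` to apply to every clone on the machine.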
git git-read-tree git-read-tree ============= Name ---- git-read-tree - Reads tree information into the index Synopsis -------- ``` git read-tree [(-m [--trivial] [--aggressive] | --reset | --prefix=<prefix>) [-u | -i]] [--index-output=<file>] [--no-sparse-checkout] (--empty | <tree-ish1> [<tree-ish2> [<tree-ish3>]]) ``` Description ----------- Reads the tree information given by <tree-ish> into the index, but does not actually **update** any of the files it "caches". (see: [git-checkout-index[1]](git-checkout-index)) Optionally, it can merge a tree into the index, perform a fast-forward (i.e. 2-way) merge, or a 3-way merge, with the `-m` flag. When used with `-m`, the `-u` flag causes it to also update the files in the work tree with the result of the merge. Trivial merges are done by `git read-tree` itself. Only conflicting paths will be in unmerged state when `git read-tree` returns. Options ------- -m Perform a merge, not just a read. The command will refuse to run if your index file has unmerged entries, indicating that you have not finished a previous merge you started. --reset Same as -m, except that unmerged entries are discarded instead of failing. When used with `-u`, updates leading to loss of working tree changes or untracked files or directories will not abort the operation. -u After a successful merge, update the files in the work tree with the result of the merge. -i Usually a merge requires the index file as well as the files in the working tree to be up to date with the current head commit, in order not to lose local changes. This flag disables the check with the working tree and is meant to be used when creating a merge of trees that are not directly related to the current working tree status into a temporary index file. -n --dry-run Check if the command would error out, without updating the index or the files in the working tree for real. -v Show the progress of checking files out. 
--trivial Restrict three-way merge by `git read-tree` to happen only if there is no file-level merging required, instead of resolving merge for trivial cases and leaving conflicting files unresolved in the index. --aggressive Usually a three-way merge by `git read-tree` resolves the merge for really trivial cases and leaves other cases unresolved in the index, so that porcelains can implement different merge policies. This flag makes the command resolve a few more cases internally: * when one side removes a path and the other side leaves the path unmodified. The resolution is to remove that path. * when both sides remove a path. The resolution is to remove that path. * when both sides add a path identically. The resolution is to add that path. --prefix=<prefix> Keep the current index contents, and read the contents of the named tree-ish under the directory at `<prefix>`. The command will refuse to overwrite entries that already existed in the original index file. --index-output=<file> Instead of writing the results out to `$GIT_INDEX_FILE`, write the resulting index in the named file. While the command is operating, the original index file is locked with the same mechanism as usual. It must be possible to rename(2) a temporary file, created next to the usual index file, into the named file; typically this means it needs to be on the same filesystem as the index file itself, and you need write permission to the directories the index file and index output file are located in. --[no-]recurse-submodules Using --recurse-submodules will update the content of all active submodules according to the commit recorded in the superproject by calling read-tree recursively, also setting the submodules' HEAD to be detached at that commit. --no-sparse-checkout Disable sparse checkout support even if `core.sparseCheckout` is true. --empty Instead of reading tree object(s) into the index, just empty it. -q --quiet Quiet, suppress feedback messages. 
<tree-ish#> The id of the tree object(s) to be read/merged. Merging ------- If `-m` is specified, `git read-tree` can perform 3 kinds of merge, a single tree merge if only 1 tree is given, a fast-forward merge with 2 trees, or a 3-way merge if 3 or more trees are provided. ### Single Tree Merge If only 1 tree is specified, `git read-tree` operates as if the user did not specify `-m`, except that if the original index has an entry for a given pathname, and the contents of the path match with the tree being read, the stat info from the index is used. (In other words, the index’s stat()s take precedence over the merged tree’s). That means that if you do a `git read-tree -m <newtree>` followed by a `git checkout-index -f -u -a`, the `git checkout-index` only checks out the stuff that really changed. This is used to avoid unnecessary false hits when `git diff-files` is run after `git read-tree`. ### Two Tree Merge Typically, this is invoked as `git read-tree -m $H $M`, where $H is the head commit of the current repository, and $M is the head of a foreign tree, which is simply ahead of $H (i.e. we are in a fast-forward situation). When two trees are specified, the user is telling `git read-tree` the following: 1. The current index and work tree is derived from $H, but the user may have local changes in them since $H. 2. The user wants to fast-forward to $M. In this case, the `git read-tree -m $H $M` command makes sure that no local change is lost as the result of this "merge". 
Here are the "carry forward" rules, where "I" denotes the index, "clean" means that index and work tree coincide, and "exists"/"nothing" refer to the presence of a path in the specified commit: ``` I H M Result ------------------------------------------------------- 0 nothing nothing nothing (does not happen) 1 nothing nothing exists use M 2 nothing exists nothing remove path from index 3 nothing exists exists, use M if "initial checkout", H == M keep index otherwise exists, fail H != M clean I==H I==M ------------------ 4 yes N/A N/A nothing nothing keep index 5 no N/A N/A nothing nothing keep index 6 yes N/A yes nothing exists keep index 7 no N/A yes nothing exists keep index 8 yes N/A no nothing exists fail 9 no N/A no nothing exists fail 10 yes yes N/A exists nothing remove path from index 11 no yes N/A exists nothing fail 12 yes no N/A exists nothing fail 13 no no N/A exists nothing fail clean (H==M) ------ 14 yes exists exists keep index 15 no exists exists keep index clean I==H I==M (H!=M) ------------------ 16 yes no no exists exists fail 17 no no no exists exists fail 18 yes no yes exists exists keep index 19 no no yes exists exists keep index 20 yes yes no exists exists use M 21 no yes no exists exists fail ``` In all "keep index" cases, the index entry stays as in the original index file. If the entry is not up to date, `git read-tree` keeps the copy in the work tree intact when operating under the -u flag. When this form of `git read-tree` returns successfully, you can see which of the "local changes" that you made were carried forward by running `git diff-index --cached $M`. Note that this does not necessarily match what `git diff-index --cached $H` would have produced before such a two tree merge. This is because of cases 18 and 19 --- if you already had the changes in $M (e.g. 
maybe you picked it up via e-mail in a patch form), `git diff-index --cached $H` would have told you about the change before this merge, but it would not show in `git diff-index --cached $M` output after the two-tree merge. Case 3 is slightly tricky and needs explanation. The result from this rule logically should be to remove the path if the user staged the removal of the path and then switched to a new branch. That however will prevent the initial checkout from happening, so the rule is modified to use M (new tree) only when the content of the index is empty. Otherwise the removal of the path is kept as long as $H and $M are the same. ### 3-Way Merge Each "index" entry has two bits worth of "stage" state. stage 0 is the normal one, and is the only one you’d see in any kind of normal use. However, when you do `git read-tree` with three trees, the "stage" starts out at 1. This means that you can do ``` $ git read-tree -m <tree1> <tree2> <tree3> ``` and you will end up with an index with all of the <tree1> entries in "stage1", all of the <tree2> entries in "stage2" and all of the <tree3> entries in "stage3". When performing a merge of another branch into the current branch, we use the common ancestor tree as <tree1>, the current branch head as <tree2>, and the other branch head as <tree3>. 
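A minimal sketch of this (the repository, file name, and branch names here are made up purely for illustration) shows the three stages appearing in the index, which can be inspected with `git ls-files --stage`:

```shell
# Hypothetical demo repo: one file changed differently on two branches.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "[email protected]"
git config user.name "Example"
echo base > file
git add file
git commit -qm base
main=$(git symbolic-ref --short HEAD)    # 'master' or 'main', depending on config
git checkout -q -b topic
echo theirs > file
git commit -qam theirs
git checkout -q "$main"
echo ours > file
git commit -qam ours
# tree1 = common ancestor, tree2 = current branch head, tree3 = other branch head
git read-tree -m "$(git merge-base "$main" topic)" "$main" topic
git ls-files --stage    # 'file' is now listed at stages 1, 2 and 3
```

Since `file` differs in all three trees, it stays as three separate index entries rather than collapsing to stage 0.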
Furthermore, `git read-tree` has special-case logic that says: if you see a file that matches in all respects in the following states, it "collapses" back to "stage0": * stage 2 and 3 are the same; take one or the other (it makes no difference - the same work has been done on our branch in stage 2 and their branch in stage 3) * stage 1 and stage 2 are the same and stage 3 is different; take stage 3 (our branch in stage 2 did not do anything since the ancestor in stage 1 while their branch in stage 3 worked on it) * stage 1 and stage 3 are the same and stage 2 is different; take stage 2 (we did something while they did nothing) The `git write-tree` command refuses to write a nonsensical tree, and it will complain about unmerged entries if it sees a single entry that is not stage 0. OK, this all sounds like a collection of totally nonsensical rules, but it’s actually exactly what you want in order to do a fast merge. The different stages represent the "result tree" (stage 0, aka "merged"), the original tree (stage 1, aka "orig"), and the two trees you are trying to merge (stage 2 and 3 respectively). The order of stages 1, 2 and 3 (hence the order of the three <tree-ish> command-line arguments) is significant when you start a 3-way merge with an index file that is already populated. Here is an outline of how the algorithm works: * if a file exists in identical format in all three trees, it will automatically collapse to "merged" state by `git read-tree`. * a file that has `any` difference what-so-ever in the three trees will stay as separate entries in the index. It’s up to "porcelain policy" to determine how to remove the non-0 stages, and insert a merged version. * the index file saves and restores with all this information, so you can merge things incrementally, but as long as it has entries in stages 1/2/3 (i.e., "unmerged entries") you can’t write the result. 
So now the merge algorithm ends up being really simple: + you walk the index in order, and ignore all entries of stage 0, since they’ve already been done. + if you find a "stage1", but no matching "stage2" or "stage3", you know it’s been removed from both trees (it only existed in the original tree), and you remove that entry. + if you find a matching "stage2" and "stage3" tree, you remove one of them, and turn the other into a "stage0" entry. Remove any matching "stage1" entry if it exists too. .. all the normal trivial rules .. You would normally use `git merge-index` with supplied `git merge-one-file` to do this last step. The script updates the files in the working tree as it merges each path and at the end of a successful merge. When you start a 3-way merge with an index file that is already populated, it is assumed that it represents the state of the files in your work tree, and you can even have files with changes unrecorded in the index file. It is further assumed that this state is "derived" from the stage 2 tree. The 3-way merge refuses to run if it finds an entry in the original index file that does not match stage 2. This is done to prevent you from losing your work-in-progress changes, and mixing your random changes in an unrelated merge commit. To illustrate, suppose you start from what has been committed last to your repository: ``` $ JC=`git rev-parse --verify "HEAD^0"` $ git checkout-index -f -u -a $JC ``` You do random edits, without running `git update-index`. And then you notice that the tip of your "upstream" tree has advanced since you pulled from him: ``` $ git fetch git://.... linus $ LT=`git rev-parse FETCH_HEAD` ``` Your work tree is still based on your HEAD ($JC), but you have some edits since. Three-way merge makes sure that you have not added or modified index entries since $JC, and if you haven’t, then does the right thing. 
So with the following sequence: ``` $ git read-tree -m -u `git merge-base $JC $LT` $JC $LT $ git merge-index git-merge-one-file -a $ echo "Merge with Linus" | \ git commit-tree `git write-tree` -p $JC -p $LT ``` what you would commit is a pure merge between $JC and $LT without your work-in-progress changes, and your work tree would be updated to the result of the merge. However, if you have local changes in the working tree that would be overwritten by this merge, `git read-tree` will refuse to run to prevent your changes from being lost. In other words, there is no need to worry about what exists only in the working tree. When you have local changes in a part of the project that is not involved in the merge, your changes do not interfere with the merge, and are kept intact. When they **do** interfere, the merge does not even start (`git read-tree` complains loudly and fails without modifying anything). In such a case, you can simply continue doing what you were in the middle of doing, and when your working tree is ready (i.e. you have finished your work-in-progress), attempt the merge again. Sparse checkout --------------- Note: The skip-worktree capabilities in [git-update-index[1]](git-update-index) and `read-tree` predated the introduction of [git-sparse-checkout[1]](git-sparse-checkout). Users are encouraged to use the `sparse-checkout` command in preference to these plumbing commands for sparse-checkout/skip-worktree related needs. However, the information below might be useful to users trying to understand the pattern style used in non-cone mode of the `sparse-checkout` command. "Sparse checkout" allows populating the working directory sparsely. It uses the skip-worktree bit (see [git-update-index[1]](git-update-index)) to tell Git whether a file in the working directory is worth looking at. `git read-tree` and other merge-based commands (`git merge`, `git checkout`…​) can help maintaining the skip-worktree bitmap and working directory update. 
`$GIT_DIR/info/sparse-checkout` is used to define the skip-worktree reference bitmap. When `git read-tree` needs to update the working directory, it resets the skip-worktree bit in the index based on this file, which uses the same syntax as .gitignore files. If an entry matches a pattern in this file, or the entry corresponds to a file present in the working tree, then skip-worktree will not be set on that entry. Otherwise, skip-worktree will be set. Then it compares the new skip-worktree value with the previous one. If skip-worktree turns from set to unset, it will add the corresponding file back. If it turns from unset to set, that file will be removed. While `$GIT_DIR/info/sparse-checkout` is usually used to specify what files are in, you can also specify what files are `not` in, using negate patterns. For example, to remove the file `unwanted`: ``` /* !unwanted ``` Another tricky thing is fully repopulating the working directory when you no longer want sparse checkout. You cannot just disable "sparse checkout" because skip-worktree bits are still in the index and your working directory is still sparsely populated. You should re-populate the working directory with the `$GIT_DIR/info/sparse-checkout` file content as follows: ``` /* ``` Then you can disable sparse checkout. Sparse checkout support in `git read-tree` and similar commands is disabled by default. You need to turn `core.sparseCheckout` on in order to have sparse checkout support. 
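As a sketch of the manual flow described above (a throwaway repository and made-up directory names are used for illustration; in current Git, prefer the `git sparse-checkout` command for this):

```shell
# Hypothetical demo: keep only 'keep/' in the working tree via sparse checkout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "[email protected]"
git config user.name "Example"
mkdir keep drop
echo a > keep/a
echo b > drop/b
git add .
git commit -qm files
git config core.sparseCheckout true
mkdir -p .git/info
printf '/keep/\n' > .git/info/sparse-checkout
git read-tree -m -u HEAD    # resets skip-worktree bits from the pattern file
ls                          # only 'keep' remains checked out
```

To repopulate everything later, write `/*` to `.git/info/sparse-checkout`, run `git read-tree -m -u HEAD` again, and then disable `core.sparseCheckout`.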
See also -------- [git-write-tree[1]](git-write-tree), [git-ls-files[1]](git-ls-files), [gitignore[5]](gitignore), [git-sparse-checkout[1]](git-sparse-checkout) git git-interpret-trailers git-interpret-trailers ====================== Name ---- git-interpret-trailers - Add or parse structured information in commit messages Synopsis -------- ``` git interpret-trailers [--in-place] [--trim-empty] [(--trailer <token>[(=|:)<value>])…​] [--parse] [<file>…​] ``` Description ----------- Help parsing or adding `trailers` lines, that look similar to RFC 822 e-mail headers, at the end of the otherwise free-form part of a commit message. This command reads some patches or commit messages from either the <file> arguments or the standard input if no <file> is specified. If `--parse` is specified, the output consists of the parsed trailers. Otherwise, this command applies the arguments passed using the `--trailer` option, if any, to the commit message part of each input file. The result is emitted on the standard output. Some configuration variables control the way the `--trailer` arguments are applied to each commit message and the way any existing trailer in the commit message is changed. They also make it possible to automatically add some trailers. By default, a `<token>=<value>` or `<token>:<value>` argument given using `--trailer` will be appended after the existing trailers only if the last trailer has a different (<token>, <value>) pair (or if there is no existing trailer). The <token> and <value> parts will be trimmed to remove starting and trailing whitespace, and the resulting trimmed <token> and <value> will appear in the message like this: ``` token: value ``` This means that the trimmed <token> and <value> will be separated by `': '` (one colon followed by one space). By default the new trailer will appear at the end of all the existing trailers. 
If there is no existing trailer, the new trailer will appear after the commit message part of the output, and, if there is no line with only spaces at the end of the commit message part, one blank line will be added before the new trailer. Existing trailers are extracted from the input message by looking for a group of one or more lines that (i) is all trailers, or (ii) contains at least one Git-generated or user-configured trailer and consists of at least 25% trailers. The group must be preceded by one or more empty (or whitespace-only) lines. The group must either be at the end of the message or be the last non-whitespace lines before a line that starts with `---` (followed by a space or the end of the line). Such three minus signs start the patch part of the message. See also `--no-divider` below. When reading trailers, there can be no whitespace before or inside the token, but any number of regular space and tab characters are allowed between the token and the separator. There can be whitespaces before, inside or after the value. The value may be split over multiple lines with each subsequent line starting with at least one whitespace, like the "folding" in RFC 822. Note that `trailers` do not follow and are not intended to follow many rules for RFC 822 headers. For example they do not follow the encoding rules and probably many other rules. Options ------- --in-place Edit the files in place. --trim-empty If the <value> part of any trailer contains only whitespace, the whole trailer will be removed from the resulting message. This applies to existing trailers as well as new trailers. --trailer <token>[(=|:)<value>] Specify a (<token>, <value>) pair that should be applied as a trailer to the input messages. See the description of this command. --where <placement> --no-where Specify where all new trailers will be added. 
A setting provided with `--where` overrides all configuration variables and applies to all `--trailer` options until the next occurrence of `--where` or `--no-where`. Possible values are `after`, `before`, `end` or `start`. --if-exists <action> --no-if-exists Specify what action will be performed when there is already at least one trailer with the same <token> in the message. A setting provided with `--if-exists` overrides all configuration variables and applies to all `--trailer` options until the next occurrence of `--if-exists` or `--no-if-exists`. Possible actions are `addIfDifferent`, `addIfDifferentNeighbor`, `add`, `replace` and `doNothing`. --if-missing <action> --no-if-missing Specify what action will be performed when there is no other trailer with the same <token> in the message. A setting provided with `--if-missing` overrides all configuration variables and applies to all `--trailer` options until the next occurrence of `--if-missing` or `--no-if-missing`. Possible actions are `doNothing` or `add`. --only-trailers Output only the trailers, not any other parts of the input. --only-input Output only trailers that exist in the input; do not add any from the command-line or by following configured `trailer.*` rules. --unfold Remove any whitespace-continuation in trailers, so that each trailer appears on a line by itself with its full content. --parse A convenience alias for `--only-trailers --only-input --unfold`. --no-divider Do not treat `---` as the end of the commit message. Use this when you know your input contains just the commit message itself (and not an email or the output of `git format-patch`). Configuration variables ----------------------- trailer.separators This option tells which characters are recognized as trailer separators. By default only `:` is recognized as a trailer separator, except that `=` is always accepted on the command line for compatibility with other git commands. 
The first character given by this option will be the default character used when another separator is not specified in the config for this trailer. For example, if the value for this option is "%=$", then only lines using the format `<token><sep><value>` with <sep> containing `%`, `=` or `$` and then spaces will be considered trailers. And `%` will be the default separator used, so by default trailers will appear like: `<token>% <value>` (one percent sign and one space will appear between the token and the value). trailer.where This option tells where a new trailer will be added. This can be `end`, which is the default, `start`, `after` or `before`. If it is `end`, then each new trailer will appear at the end of the existing trailers. If it is `start`, then each new trailer will appear at the start, instead of the end, of the existing trailers. If it is `after`, then each new trailer will appear just after the last trailer with the same <token>. If it is `before`, then each new trailer will appear just before the first trailer with the same <token>. trailer.ifexists This option makes it possible to choose what action will be performed when there is already at least one trailer with the same <token> in the message. The valid values for this option are: `addIfDifferentNeighbor` (this is the default), `addIfDifferent`, `add`, `replace` or `doNothing`. With `addIfDifferentNeighbor`, a new trailer will be added only if no trailer with the same (<token>, <value>) pair is above or below the line where the new trailer will be added. With `addIfDifferent`, a new trailer will be added only if no trailer with the same (<token>, <value>) pair is already in the message. With `add`, a new trailer will be added, even if some trailers with the same (<token>, <value>) pair are already in the message. With `replace`, an existing trailer with the same <token> will be deleted and the new trailer will be added. 
The deleted trailer will be the closest one (with the same <token>) to the place where the new one will be added. With `doNothing`, nothing will be done; that is no new trailer will be added if there is already one with the same <token> in the message. trailer.ifmissing This option makes it possible to choose what action will be performed when there is not yet any trailer with the same <token> in the message. The valid values for this option are: `add` (this is the default) and `doNothing`. With `add`, a new trailer will be added. With `doNothing`, nothing will be done. trailer.<token>.key This `key` will be used instead of <token> in the trailer. At the end of this key, a separator can appear and then some space characters. By default the only valid separator is `:`, but this can be changed using the `trailer.separators` config variable. If there is a separator, then the key will be used instead of both the <token> and the default separator when adding the trailer. trailer.<token>.where This option takes the same values as the `trailer.where` configuration variable and it overrides what is specified by that option for trailers with the specified <token>. trailer.<token>.ifexists This option takes the same values as the `trailer.ifexists` configuration variable and it overrides what is specified by that option for trailers with the specified <token>. trailer.<token>.ifmissing This option takes the same values as the `trailer.ifmissing` configuration variable and it overrides what is specified by that option for trailers with the specified <token>. trailer.<token>.command This option behaves in the same way as `trailer.<token>.cmd`, except that it doesn’t pass anything as argument to the specified command. Instead the first occurrence of substring $ARG is replaced by the value that would be passed as argument. 
The `trailer.<token>.command` option has been deprecated in favor of `trailer.<token>.cmd` due to the fact that $ARG in the user’s command is only replaced once and that the original way of replacing $ARG is not safe. When both `trailer.<token>.cmd` and `trailer.<token>.command` are given for the same <token>, `trailer.<token>.cmd` is used and `trailer.<token>.command` is ignored. trailer.<token>.cmd This option can be used to specify a shell command that will be called: once to automatically add a trailer with the specified <token>, and then each time a `--trailer <token>=<value>` argument to modify the <value> of the trailer that this option would produce. When the specified command is first called to add a trailer with the specified <token>, the behavior is as if a special `--trailer <token>=<value>` argument was added at the beginning of the "git interpret-trailers" command, where <value> is taken to be the standard output of the command with any leading and trailing whitespace trimmed off. If some `--trailer <token>=<value>` arguments are also passed on the command line, the command is called again once for each of these arguments with the same <token>. And the <value> part of these arguments, if any, will be passed to the command as its first argument. This way the command can produce a <value> computed from the <value> passed in the `--trailer <token>=<value>` argument. 
Examples -------- * Configure a `sign` trailer with a `Signed-off-by` key, and then add two of these trailers to a message: ``` $ git config trailer.sign.key "Signed-off-by" $ cat msg.txt subject message $ cat msg.txt | git interpret-trailers --trailer 'sign: Alice <[email protected]>' --trailer 'sign: Bob <[email protected]>' subject message Signed-off-by: Alice <[email protected]> Signed-off-by: Bob <[email protected]> ``` * Use the `--in-place` option to edit a message file in place: ``` $ cat msg.txt subject message Signed-off-by: Bob <[email protected]> $ git interpret-trailers --trailer 'Acked-by: Alice <[email protected]>' --in-place msg.txt $ cat msg.txt subject message Signed-off-by: Bob <[email protected]> Acked-by: Alice <[email protected]> ``` * Extract the last commit as a patch, and add a `Cc` and a `Reviewed-by` trailer to it: ``` $ git format-patch -1 0001-foo.patch $ git interpret-trailers --trailer 'Cc: Alice <[email protected]>' --trailer 'Reviewed-by: Bob <[email protected]>' 0001-foo.patch >0001-bar.patch ``` * Configure a `sign` trailer with a command to automatically add a 'Signed-off-by: ' with the author information only if there is no 'Signed-off-by: ' already, and show how it works: ``` $ git config trailer.sign.key "Signed-off-by: " $ git config trailer.sign.ifmissing add $ git config trailer.sign.ifexists doNothing $ git config trailer.sign.command 'echo "$(git config user.name) <$(git config user.email)>"' $ git interpret-trailers <<EOF > EOF Signed-off-by: Bob <[email protected]> $ git interpret-trailers <<EOF > Signed-off-by: Alice <[email protected]> > EOF Signed-off-by: Alice <[email protected]> ``` * Configure a `fix` trailer with a key that contains a `#` and no space after this character, and show how it works: ``` $ git config trailer.separators ":#" $ git config trailer.fix.key "Fix #" $ echo "subject" | git interpret-trailers --trailer fix=42 subject Fix #42 ``` * Configure a `help` trailer with a `cmd` that uses a script `glog-find-author`, which searches the git log of the repository for the specified author identity, and show how it works: ``` $ cat ~/bin/glog-find-author #!/bin/sh test -n "$1" && git log --author="$1" --pretty="%an <%ae>" -1 || true $ git config trailer.help.key "Helped-by: " $ git config trailer.help.ifExists "addIfDifferentNeighbor" $ git config trailer.help.cmd "~/bin/glog-find-author" $ git interpret-trailers --trailer="help:Junio" --trailer="help:Couder" <<EOF > subject > > message > > EOF subject message Helped-by: Junio C Hamano <[email protected]> Helped-by: Christian Couder <[email protected]> ``` * Configure a `ref` trailer with a `cmd` that uses a script `glog-grep` to grep the last relevant commit from the git log of the repository, and show how it works: ``` $ cat ~/bin/glog-grep #!/bin/sh test -n "$1" && git log --grep "$1" --pretty=reference -1 || true $ git config trailer.ref.key "Reference-to: " $ git config trailer.ref.ifExists "replace" $ git config trailer.ref.cmd "~/bin/glog-grep" $ git interpret-trailers --trailer="ref:Add copyright notices." 
<<EOF > subject > > message > > EOF subject message Reference-to: 8bc9a0c769 (Add copyright notices., 2005-04-07) ``` * Configure a `see` trailer with a command to show the subject of a commit that is related, and show how it works: ``` $ git config trailer.see.key "See-also: " $ git config trailer.see.ifExists "replace" $ git config trailer.see.ifMissing "doNothing" $ git config trailer.see.command "git log -1 --oneline --format=\"%h (%s)\" --abbrev-commit --abbrev=14 \$ARG" $ git interpret-trailers <<EOF > subject > > message > > see: HEAD~2 > EOF subject message See-also: fe3187489d69c4 (subject of related commit) ``` * Configure a commit template with some trailers with empty values (using sed to show and keep the trailing spaces at the end of the trailers), then configure a commit-msg hook that uses `git interpret-trailers` to remove trailers with empty values and to add a `git-version` trailer: ``` $ sed -e 's/ Z$/ /' >commit_template.txt <<EOF > ***subject*** > > ***message*** > > Fixes: Z > Cc: Z > Reviewed-by: Z > Signed-off-by: Z > EOF $ git config commit.template commit_template.txt $ cat >.git/hooks/commit-msg <<EOF > #!/bin/sh > git interpret-trailers --trim-empty --trailer "git-version: \$(git describe)" "\$1" > "\$1.new" > mv "\$1.new" "\$1" > EOF $ chmod +x .git/hooks/commit-msg ``` See also -------- [git-commit[1]](git-commit), [git-format-patch[1]](git-format-patch), [git-config[1]](git-config)
git git-shell git-shell ========= Name ---- git-shell - Restricted login shell for Git-only SSH access Synopsis -------- ``` chsh -s $(command -v git-shell) <user> git clone <user>@localhost:/path/to/repo.git ssh <user>@localhost ``` Description ----------- This is a login shell for SSH accounts to provide restricted Git access. It permits execution only of server-side Git commands implementing the pull/push functionality, plus custom commands present in a subdirectory named `git-shell-commands` in the user’s home directory. Commands -------- `git shell` accepts the following commands after the `-c` option: *git receive-pack <argument>* *git upload-pack <argument>* *git upload-archive <argument>* Call the corresponding server-side command to support the client’s `git push`, `git fetch`, or `git archive --remote` request. *cvs server* Imitate a CVS server. See [git-cvsserver[1]](git-cvsserver). If a `~/git-shell-commands` directory is present, `git shell` will also handle other, custom commands by running "`git-shell-commands/<command> <arguments>`" from the user’s home directory. Interactive use --------------- By default, the commands above can be executed only with the `-c` option; the shell is not interactive. If a `~/git-shell-commands` directory is present, `git shell` can also be run interactively (with no arguments). If a `help` command is present in the `git-shell-commands` directory, it is run to provide the user with an overview of allowed actions. Then a "git> " prompt is presented at which one can enter any of the commands from the `git-shell-commands` directory, or `exit` to close the connection. Generally this mode is used as an administrative interface to allow users to list repositories they have access to, create, delete, or rename repositories, or change repository descriptions and permissions. If a `no-interactive-login` command exists, then it is run and the interactive shell is aborted. 
Examples -------- To disable interactive logins, displaying a greeting instead: ``` $ chsh -s /usr/bin/git-shell $ mkdir $HOME/git-shell-commands $ cat >$HOME/git-shell-commands/no-interactive-login <<\EOF #!/bin/sh printf '%s\n' "Hi $USER! You've successfully authenticated, but I do not" printf '%s\n' "provide interactive shell access." exit 128 EOF $ chmod +x $HOME/git-shell-commands/no-interactive-login ``` To enable git-cvsserver access (which should generally have the `no-interactive-login` example above as a prerequisite, as creating the git-shell-commands directory allows interactive logins): ``` $ cat >$HOME/git-shell-commands/cvs <<\EOF #!/bin/sh if test $# -ne 1 || test "$1" != "server" then echo >&2 "git-cvsserver only handles \"server\"" exit 1 fi exec git cvsserver server EOF $ chmod +x $HOME/git-shell-commands/cvs ``` See also -------- ssh(1), [git-daemon[1]](git-daemon), contrib/git-shell-commands/README git api-trace2 api-trace2 ========== The Trace2 API can be used to print debug, performance, and telemetry information to stderr or a file. The Trace2 feature is inactive unless explicitly enabled by enabling one or more Trace2 Targets. The Trace2 API is intended to replace the existing (Trace1) `printf()`-style tracing provided by the `GIT_TRACE` and `GIT_TRACE_PERFORMANCE` facilities. During initial implementation, Trace2 and Trace1 may operate in parallel. The Trace2 API defines a set of high-level messages with known fields, such as (`start`: `argv`) and (`exit`: {`exit-code`, `elapsed-time`}). Trace2 instrumentation throughout the Git code base sends Trace2 messages to the enabled Trace2 Targets. Targets transform the content of these messages into purpose-specific formats and write events to their data streams. In this manner, the Trace2 API can drive many different types of analysis. Targets are defined using a VTable, allowing easy extension to other formats in the future. This might be used to define a binary format, for example.
Trace2 is controlled using `trace2.*` config values in the system and global config files and `GIT_TRACE2*` environment variables. Trace2 does not read from repo-local or worktree config files, nor does it respect `-c` command line config settings. Trace2 targets -------------- Trace2 defines the following set of Trace2 Targets. Format details are given in a later section. ### The Normal Format Target The normal format target is a traditional `printf()` format, similar to the `GIT_TRACE` format. This format is enabled with the `GIT_TRACE2` environment variable or the `trace2.normalTarget` system or global config setting. For example ``` $ export GIT_TRACE2=~/log.normal $ git version git version 2.20.1.155.g426c96fcdb ``` or ``` $ git config --global trace2.normalTarget ~/log.normal $ git version git version 2.20.1.155.g426c96fcdb ``` yields ``` $ cat ~/log.normal 12:28:42.620009 common-main.c:38 version 2.20.1.155.g426c96fcdb 12:28:42.620989 common-main.c:39 start git version 12:28:42.621101 git.c:432 cmd_name version (version) 12:28:42.621215 git.c:662 exit elapsed:0.001227 code:0 12:28:42.621250 trace2/tr2_tgt_normal.c:124 atexit elapsed:0.001265 code:0 ``` ### The Performance Format Target The performance format target (PERF) is a column-based format to replace `GIT_TRACE_PERFORMANCE` and is suitable for development and testing, possibly to complement tools like `gprof`. This format is enabled with the `GIT_TRACE2_PERF` environment variable or the `trace2.perfTarget` system or global config setting.
For example ``` $ export GIT_TRACE2_PERF=~/log.perf $ git version git version 2.20.1.155.g426c96fcdb ``` or ``` $ git config --global trace2.perfTarget ~/log.perf $ git version git version 2.20.1.155.g426c96fcdb ``` yields ``` $ cat ~/log.perf 12:28:42.620675 common-main.c:38 | d0 | main | version | | | | | 2.20.1.155.g426c96fcdb 12:28:42.621001 common-main.c:39 | d0 | main | start | | 0.001173 | | | git version 12:28:42.621111 git.c:432 | d0 | main | cmd_name | | | | | version (version) 12:28:42.621225 git.c:662 | d0 | main | exit | | 0.001227 | | | code:0 12:28:42.621259 trace2/tr2_tgt_perf.c:211 | d0 | main | atexit | | 0.001265 | | | code:0 ``` ### The Event Format Target The event format target is a JSON-based format of event data suitable for telemetry analysis. This format is enabled with the `GIT_TRACE2_EVENT` environment variable or the `trace2.eventTarget` system or global config setting. For example ``` $ export GIT_TRACE2_EVENT=~/log.event $ git version git version 2.20.1.155.g426c96fcdb ``` or ``` $ git config --global trace2.eventTarget ~/log.event $ git version git version 2.20.1.155.g426c96fcdb ``` yields ``` $ cat ~/log.event {"event":"version","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.620713Z","file":"common-main.c","line":38,"evt":"3","exe":"2.20.1.155.g426c96fcdb"} {"event":"start","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621027Z","file":"common-main.c","line":39,"t_abs":0.001173,"argv":["git","version"]} {"event":"cmd_name","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621122Z","file":"git.c","line":432,"name":"version","hierarchy":"version"} {"event":"exit","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621236Z","file":"git.c","line":662,"t_abs":0.001227,"code":0} 
{"event":"atexit","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621268Z","file":"trace2/tr2_tgt_event.c","line":163,"t_abs":0.001265,"code":0} ``` ### Enabling a Target To enable a target, set the corresponding environment variable or system or global config value to one of the following: * `0` or `false` - Disables the target. * `1` or `true` - Writes to `STDERR`. * `[2-9]` - Writes to the already opened file descriptor. * `<absolute-pathname>` - Writes to the file in append mode. If the target already exists and is a directory, the traces will be written to files (one per process) underneath the given directory. * `af_unix:[<socket_type>:]<absolute-pathname>` - Writes to a Unix Domain Socket (on platforms that support them). The socket type can be either `stream` or `dgram`; if omitted, Git will try both. When trace files are written to a target directory, they will be named according to the last component of the SID (optionally followed by a counter to avoid filename collisions). Trace2 api ---------- The Trace2 public API is defined and documented in `trace2.h`; refer to it for more information. All public functions and macros are prefixed with `trace2_` and are implemented in `trace2.c`. There are no public Trace2 data structures. The Trace2 code also defines a set of private functions and data types in the `trace2/` directory. These symbols are prefixed with `tr2_` and should only be used by functions in `trace2.c` (or other private source files in `trace2/`). ### Conventions for Public Functions and Macros Some functions have a `_fl()` suffix to indicate that they take `file` and `line-number` arguments. Some functions have a `_va_fl()` suffix to indicate that they also take a `va_list` argument. Some functions have a `_printf_fl()` suffix to indicate that they also take a `printf()` style format with a variable number of arguments. CPP wrapper macros are defined to hide most of these details.
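Trace output is designed for post-processing. As a quick sketch (not part of the Git tooling), the elapsed time recorded in the `exit` event of a NORMAL-format trace, such as the `log.normal` sample shown earlier, can be extracted with awk. The field positions here are read off that sample rather than any stable interface:

```shell
# Recreate a fragment of the NORMAL-format trace shown earlier.
cat >log.normal <<'EOF'
12:28:42.620009 common-main.c:38 version 2.20.1.155.g426c96fcdb
12:28:42.621215 git.c:662 exit elapsed:0.001227 code:0
EOF

# Field 3 is the event name; field 4 of the "exit" event carries
# the elapsed time as "elapsed:<seconds>".
awk '$3 == "exit" { sub(/^elapsed:/, "", $4); print $4 }' log.normal
```

This prints `0.001227` for the sample above. Note that with `GIT_TRACE2_BRIEF` the time/file/line columns disappear, so any such script must know which variant it is parsing.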
Trace2 target formats --------------------- ### NORMAL Format Events are written as lines of the form: ``` [<time> SP <filename>:<line> SP+] <event-name> [[SP] <event-message>] LF ``` `<event-name>` is the event name. `<event-message>` is a free-form `printf()` message intended for human consumption. Note that this may contain embedded LF or CRLF characters that are not escaped, so the event may spill across multiple lines. If `GIT_TRACE2_BRIEF` or `trace2.normalBrief` is true, the `time`, `filename`, and `line` fields are omitted. This target is intended to be more of a summary (like `GIT_TRACE`) and less detailed than the other targets. It ignores thread, region, and data messages, for example. ### PERF Format Events are written as lines of the form: ``` [<time> SP <filename>:<line> SP+ BAR SP] d<depth> SP BAR SP <thread-name> SP+ BAR SP <event-name> SP+ BAR SP [r<repo-id>] SP+ BAR SP [<t_abs>] SP+ BAR SP [<t_rel>] SP+ BAR SP [<category>] SP+ BAR SP DOTS* <perf-event-message> LF ``` `<depth>` is the git process depth. This is the number of parent git processes. A top-level git command has depth value "d0". A child of it has depth value "d1". A second-level child has depth value "d2" and so on. `<thread-name>` is a unique name for the thread. The primary thread is called "main". Other thread names are of the form "th%d:%s" and include a unique number and the name of the thread-proc. `<event-name>` is the event name. `<repo-id>` when present, is a number indicating the repository in use. A `def_repo` event is emitted when a repository is opened. This defines the repo-id and associated worktree. Subsequent repo-specific events will reference this repo-id. Currently, this is always "r1" for the main repository. This field is in anticipation of in-proc submodules in the future. `<t_abs>` when present, is the absolute time in seconds since the program started. `<t_rel>` when present, is the time in seconds relative to the start of the current region.
For a thread-exit event, it is the elapsed time of the thread. `<category>` is present on region and data events and is used to indicate a broad category, such as "index" or "status". `<perf-event-message>` is a free-form `printf()` message intended for human consumption. ``` 15:33:33.532712 wt-status.c:2310 | d0 | main | region_enter | r1 | 0.126064 | | status | label:print 15:33:33.532712 wt-status.c:2331 | d0 | main | region_leave | r1 | 0.127568 | 0.001504 | status | label:print ``` If `GIT_TRACE2_PERF_BRIEF` or `trace2.perfBrief` is true, the `time`, `file`, and `line` fields are omitted. ``` d0 | main | region_leave | r1 | 0.011717 | 0.009122 | index | label:preload ``` The PERF target is intended for interactive performance analysis during development and is quite noisy. ### EVENT Format Each event is a JSON object containing multiple key/value pairs written as a single line and followed by an LF. ``` '{' <key> ':' <value> [',' <key> ':' <value>]* '}' LF ``` Some key/value pairs are common to all events and some are event-specific. #### Common Key/Value Pairs The following key/value pairs are common to all events: ``` { "event":"version", "sid":"20190408T191827.272759Z-H9b68c35f-P00003510", "thread":"main", "time":"2019-04-08T19:18:27.282761Z", "file":"common-main.c", "line":42, ... } ``` `"event":<event>` is the event name. `"sid":<sid>` is the session-id. This is a unique string to identify the process instance to allow all events emitted by a process to be identified. A session-id is used instead of a PID because PIDs are recycled by the OS. For child git processes, the session-id is prepended with the session-id of the parent git process to allow parent-child relationships to be identified during post-processing. `"thread":<thread>` is the thread name. `"time":<time>` is the UTC time of the event. `"file":<filename>` is the source file generating the event. `"line":<line-number>` is the integer source line number generating the event.
`"repo":<repo-id>` when present, is the integer repo-id as described previously. If `GIT_TRACE2_EVENT_BRIEF` or `trace2.eventBrief` is true, the `file` and `line` fields are omitted from all events and the `time` field is only present on the "start" and "atexit" events. #### Event-Specific Key/Value Pairs `"version"` This event gives the version of the executable and the EVENT format. It should always be the first event in a trace session. The EVENT format version will be incremented if new event types are added, if existing fields are removed, or if there are significant changes in interpretation of existing events or fields. Smaller changes, such as adding a new field to an existing event, will not require an increment to the EVENT format version. ``` { "event":"version", ... "evt":"3", # EVENT format version "exe":"2.20.1.155.g426c96fcdb" # git version } ``` `"too_many_files"` This event is written to the git-trace2-discard sentinel file if there are too many files in the target trace directory (see the trace2.maxFiles config option). ``` { "event":"too_many_files", ... } ``` `"start"` This event contains the complete argv received by main(). ``` { "event":"start", ... "t_abs":0.001227, # elapsed time in seconds "argv":["git","version"] } ``` `"exit"` This event is emitted when git calls `exit()`. ``` { "event":"exit", ... "t_abs":0.001227, # elapsed time in seconds "code":0 # exit code } ``` `"atexit"` This event is emitted by the Trace2 `atexit` routine during final shutdown. It should be the last event emitted by the process. (The elapsed time reported here is greater than the time reported in the "exit" event because it runs after all other atexit tasks have completed.) ``` { "event":"atexit", ... "t_abs":0.001227, # elapsed time in seconds "code":0 # exit code } ``` `"signal"` This event is emitted when the program is terminated by a user signal. Depending on the platform, the signal event may prevent the "atexit" event from being generated. 
``` { "event":"signal", ... "t_abs":0.001227, # elapsed time in seconds "signo":13 # SIGTERM, SIGINT, etc. } ``` `"error"` This event is emitted when one of the `BUG()`, `bug()`, `error()`, `die()`, `warning()`, or `usage()` functions is called. ``` { "event":"error", ... "msg":"invalid option: --cahced", # formatted error message "fmt":"invalid option: %s" # error format string } ``` The error event may be emitted more than once. The format string allows post-processors to group errors by type without worrying about specific error arguments. `"cmd_path"` This event contains the discovered full path of the git executable (on platforms that are configured to resolve it). ``` { "event":"cmd_path", ... "path":"C:/work/gfw/git.exe" } ``` `"cmd_ancestry"` This event contains the text command name for the parent (and earlier generations of parents) of the current process, in an array ordered from nearest parent to furthest great-grandparent. It may not be implemented on all platforms. ``` { "event":"cmd_ancestry", ... "ancestry":["bash","tmux: server","systemd"] } ``` `"cmd_name"` This event contains the command name for this git process and the hierarchy of commands from parent git processes. ``` { "event":"cmd_name", ... "name":"pack-objects", "hierarchy":"push/pack-objects" } ``` Normally, the "name" field contains the canonical name of the command. When a canonical name is not available, one of these special values is used: ``` "_query_" # "git --html-path" "_run_dashed_" # when "git foo" tries to run "git-foo" "_run_shell_alias_" # alias expansion to a shell command "_run_git_alias_" # alias expansion to a git command "_usage_" # usage error ``` `"cmd_mode"` This event, when present, describes the command variant. This event may be emitted more than once. ``` { "event":"cmd_mode", ... "name":"branch" } ``` The "name" field is an arbitrary string to describe the command mode. For example, checkout can check out a branch or an individual file.
And these variations typically have different performance characteristics that are not comparable. `"alias"` This event is present when an alias is expanded. ``` { "event":"alias", ... "alias":"l", # registered alias "argv":["log","--graph"] # alias expansion } ``` `"child_start"` This event describes a child process that is about to be spawned. ``` { "event":"child_start", ... "child_id":2, "child_class":"?", "use_shell":false, "argv":["git","rev-list","--objects","--stdin","--not","--all","--quiet"], "hook_name":"<hook_name>", # present when child_class is "hook" "cd":"<path>" # present when cd is required } ``` The "child_id" field can be used to match this child_start with the corresponding child_exit event. The "child_class" field is a rough classification, such as "editor", "pager", "transport/*", and "hook". Unclassified children are classified with "?". `"child_exit"` This event is generated after the current process has returned from `waitpid()` and collected the exit information from the child. ``` { "event":"child_exit", ... "child_id":2, "pid":14708, # child PID "code":0, # child exit-code "t_rel":0.110605 # observed run-time of child process } ``` Note that the session-id of the child process is not available to the current/spawning process, so the child’s PID is reported here as a hint for post-processing. (But it is only a hint because the child process may be a shell script which doesn’t have a session-id.) Note that the `t_rel` field contains the observed run time in seconds for the child process (starting before the fork/exec/spawn, stopping after the `waitpid()`, and including OS process creation overhead). So this time will be slightly larger than the atexit time reported by the child process itself. `"child_ready"` This event is generated after the current process has started a background process and released all handles to it. ``` { "event":"child_ready", ...
"child_id":2, "pid":14708, # child PID "ready":"ready", # child ready state "t_rel":0.110605 # observed run-time of child process } ``` Note that the session-id of the child process is not available to the current/spawning process, so the child’s PID is reported here as a hint for post-processing. (But it is only a hint because the child process may be a shell script which doesn’t have a session-id.) This event is generated after the child is started in the background and given a little time to boot up and start working. If the child starts up normally while the parent is still waiting, the "ready" field will have the value "ready". If the child is too slow to start and the parent times out, the field will have the value "timeout". If the child starts but the parent is unable to probe it, the field will have the value "error". After the parent process emits this event, it will release all of its handles to the child process and treat the child as a background daemon. So even if the child does eventually finish booting up, the parent will not emit an updated event. Note that the `t_rel` field contains the observed run time in seconds when the parent released the child process into the background. The child is assumed to be a long-running daemon process and may outlive the parent process. So the parent’s child event times should not be compared to the child’s atexit times. `"exec"` This event is generated before git attempts to `exec()` another command rather than starting a child process. ``` { "event":"exec", ... "exec_id":0, "exe":"git", "argv":["foo", "bar"] } ``` The "exec_id" field is a command-unique id and is only useful if the `exec()` fails and a corresponding exec_result event is generated. `"exec_result"` This event is generated if the `exec()` fails and control returns to the current git command. ``` { "event":"exec_result", ...
"exec_id":0, "code":1 # error code (errno) from exec() } ``` `"thread_start"` This event is generated when a thread is started. It is generated from **within** the new thread’s thread-proc (because it needs to access data in the thread’s thread-local storage). ``` { "event":"thread_start", ... "thread":"th02:preload_thread" # thread name } ``` `"thread_exit"` This event is generated when a thread exits. It is generated from **within** the thread’s thread-proc. ``` { "event":"thread_exit", ... "thread":"th02:preload_thread", # thread name "t_rel":0.007328 # thread elapsed time } ``` `"def_param"` This event is generated to log a global parameter, such as a config setting, command-line flag, or environment variable. ``` { "event":"def_param", ... "scope":"global", "param":"core.abbrev", "value":"7" } ``` `"def_repo"` This event defines a repo-id and associates it with the root of the worktree. ``` { "event":"def_repo", ... "repo":1, "worktree":"/Users/jeffhost/work/gfw" } ``` As stated earlier, the repo-id is currently always 1, so there will only be one def_repo event. Later, if in-proc submodules are supported, a def_repo event should be emitted for each submodule visited. `"region_enter"` This event is generated when entering a region. ``` { "event":"region_enter", ... "repo":1, # optional "nesting":1, # current region stack depth "category":"index", # optional "label":"do_read_index", # optional "msg":".git/index" # optional } ``` The `category` field may be used in a future enhancement to do category-based filtering. `GIT_TRACE2_EVENT_NESTING` or `trace2.eventNesting` can be used to filter deeply nested regions and data events. It defaults to "2". `"region_leave"` This event is generated when leaving a region. ``` { "event":"region_leave", ...
"repo":1, # optional "t_rel":0.002876, # time spent in region in seconds "nesting":1, # region stack depth "category":"index", # optional "label":"do_read_index", # optional "msg":".git/index" # optional } ``` `"data"` This event is generated to log a thread- and region-local key/value pair. ``` { "event":"data", ... "repo":1, # optional "t_abs":0.024107, # absolute elapsed time "t_rel":0.001031, # elapsed time in region/thread "nesting":2, # region stack depth "category":"index", "key":"read/cache_nr", "value":"3552" } ``` The "value" field may be an integer or a string. `"data_json"` This event is generated to log a pre-formatted JSON string containing structured data. ``` { "event":"data_json", ... "repo":1, # optional "t_abs":0.015905, "t_rel":0.015905, "nesting":1, "category":"process", "key":"windows/ancestry", "value":["bash.exe","bash.exe"] } ``` `"th_timer"` This event logs the amount of time that a stopwatch timer was running in the thread. This event is generated when a thread exits for timers that requested per-thread events. ``` { "event":"th_timer", ... "category":"my_category", "name":"my_timer", "intervals":5, # number of times it was started/stopped "t_total":0.052741, # total time in seconds it was running "t_min":0.010061, # shortest interval "t_max":0.011648 # longest interval } ``` `"timer"` This event logs the amount of time that a stopwatch timer was running aggregated across all threads. This event is generated when the process exits. ``` { "event":"timer", ... "category":"my_category", "name":"my_timer", "intervals":5, # number of times it was started/stopped "t_total":0.052741, # total time in seconds it was running "t_min":0.010061, # shortest interval "t_max":0.011648 # longest interval } ``` `"th_counter"` This event logs the value of a counter variable in a thread. This event is generated when a thread exits for counters that requested per-thread events. ``` { "event":"th_counter", ...
"category":"my_category", "name":"my_counter", "count":23 } ``` `"counter"` This event logs the value of a counter variable across all threads. This event is generated when the process exits. The total value reported here is the sum across all threads. ``` { "event":"counter", ... "category":"my_category", "name":"my_counter", "count":23 } ``` Example trace2 api usage ------------------------ Here is a hypothetical example showing the intended usage of the Trace2 API (without worrying about the actual Git details). Initialization Initialization happens in `main()`. Behind the scenes, `atexit` and `signal` handlers are registered. ``` int main(int argc, const char **argv) { int exit_code; trace2_initialize(); trace2_cmd_start(argv); exit_code = cmd_main(argc, argv); trace2_cmd_exit(exit_code); return exit_code; } ``` Command Details After the basics are established, additional command information can be sent to Trace2 as it is discovered. ``` int cmd_checkout(int argc, const char **argv) { trace2_cmd_name("checkout"); trace2_cmd_mode("branch"); trace2_def_repo(the_repository); // emit "def_param" messages for "interesting" config settings. trace2_cmd_list_config(); if (do_something()) trace2_cmd_error("Path '%s': cannot do something", path); return 0; } ``` Child Processes Wrap code spawning child processes. ``` void run_child(...) { int child_exit_code; struct child_process cmd = CHILD_PROCESS_INIT; ... cmd.trace2_child_class = "editor"; trace2_child_start(&cmd); child_exit_code = spawn_child_and_wait_for_it(); trace2_child_exit(&cmd, child_exit_code); } ``` For example, the following fetch command spawned ssh, index-pack, rev-list, and gc. This example also shows that fetch took 5.199 seconds, of which 4.932 was in ssh. ``` $ export GIT_TRACE2_BRIEF=1 $ export GIT_TRACE2=~/log.normal $ git fetch origin ...
``` ``` $ cat ~/log.normal version 2.20.1.vfs.1.1.47.g534dbe1ad1 start git fetch origin worktree /Users/jeffhost/work/gfw cmd_name fetch (fetch) child_start[0] ssh [email protected] ... child_start[1] git index-pack ... ... (Trace2 events from child processes omitted) child_exit[1] pid:14707 code:0 elapsed:0.076353 child_exit[0] pid:14706 code:0 elapsed:4.931869 child_start[2] git rev-list ... ... (Trace2 events from child process omitted) child_exit[2] pid:14708 code:0 elapsed:0.110605 child_start[3] git gc --auto ... (Trace2 events from child process omitted) child_exit[3] pid:14709 code:0 elapsed:0.006240 exit elapsed:5.198503 code:0 atexit elapsed:5.198541 code:0 ``` When a git process is a (direct or indirect) child of another git process, it inherits Trace2 context information. This allows the child to print the command hierarchy. This example shows gc as child[3] of fetch. When the gc process reports its name as "gc", it also reports the hierarchy as "fetch/gc". (In this example, trace2 messages from the child process are indented for clarity.) ``` $ export GIT_TRACE2_BRIEF=1 $ export GIT_TRACE2=~/log.normal $ git fetch origin ... ``` ``` $ cat ~/log.normal version 2.20.1.160.g5676107ecd.dirty start git fetch official worktree /Users/jeffhost/work/gfw cmd_name fetch (fetch) ... child_start[3] git gc --auto version 2.20.1.160.g5676107ecd.dirty start /Users/jeffhost/work/gfw/git gc --auto worktree /Users/jeffhost/work/gfw cmd_name gc (fetch/gc) exit elapsed:0.001959 code:0 atexit elapsed:0.001997 code:0 child_exit[3] pid:20303 code:0 elapsed:0.007564 exit elapsed:3.868938 code:0 atexit elapsed:3.868970 code:0 ``` Regions Regions can be used to time an interesting section of code.
``` void wt_status_collect(struct wt_status *s) { trace2_region_enter("status", "worktrees", s->repo); wt_status_collect_changes_worktree(s); trace2_region_leave("status", "worktrees", s->repo); trace2_region_enter("status", "index", s->repo); wt_status_collect_changes_index(s); trace2_region_leave("status", "index", s->repo); trace2_region_enter("status", "untracked", s->repo); wt_status_collect_untracked(s); trace2_region_leave("status", "untracked", s->repo); } void wt_status_print(struct wt_status *s) { trace2_region_enter("status", "print", s->repo); switch (s->status_format) { ... } trace2_region_leave("status", "print", s->repo); } ``` In this example, scanning for untracked files ran from +0.012568 to +0.027149 (since the process started) and took 0.014581 seconds. ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ git status ... $ cat ~/log.perf d0 | main | version | | | | | 2.20.1.160.g5676107ecd.dirty d0 | main | start | | 0.001173 | | | git status d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw d0 | main | cmd_name | | | | | status (status) ... d0 | main | region_enter | r1 | 0.010988 | | status | label:worktrees d0 | main | region_leave | r1 | 0.011236 | 0.000248 | status | label:worktrees d0 | main | region_enter | r1 | 0.011260 | | status | label:index d0 | main | region_leave | r1 | 0.012542 | 0.001282 | status | label:index d0 | main | region_enter | r1 | 0.012568 | | status | label:untracked d0 | main | region_leave | r1 | 0.027149 | 0.014581 | status | label:untracked d0 | main | region_enter | r1 | 0.027411 | | status | label:print d0 | main | region_leave | r1 | 0.028741 | 0.001330 | status | label:print d0 | main | exit | | 0.028778 | | | code:0 d0 | main | atexit | | 0.028809 | | | code:0 ``` Regions may be nested. This causes messages to be indented in the PERF target, for example. Elapsed times are relative to the start of the corresponding nesting level as expected. 
For example, if we add region messages to: ``` static enum path_treatment read_directory_recursive(struct dir_struct *dir, struct index_state *istate, const char *base, int baselen, struct untracked_cache_dir *untracked, int check_only, int stop_at_first_file, const struct pathspec *pathspec) { enum path_treatment state, subdir_state, dir_state = path_none; trace2_region_enter_printf("dir", "read_recursive", NULL, "%.*s", baselen, base); ... trace2_region_leave_printf("dir", "read_recursive", NULL, "%.*s", baselen, base); return dir_state; } ``` We can further investigate the time spent scanning for untracked files. ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ git status ... $ cat ~/log.perf d0 | main | version | | | | | 2.20.1.162.gb4ccea44db.dirty d0 | main | start | | 0.001173 | | | git status d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw d0 | main | cmd_name | | | | | status (status) ... d0 | main | region_enter | r1 | 0.015047 | | status | label:untracked d0 | main | region_enter | | 0.015132 | | dir | ..label:read_recursive d0 | main | region_enter | | 0.016341 | | dir | ....label:read_recursive vcs-svn/ d0 | main | region_leave | | 0.016422 | 0.000081 | dir | ....label:read_recursive vcs-svn/ d0 | main | region_enter | | 0.016446 | | dir | ....label:read_recursive xdiff/ d0 | main | region_leave | | 0.016522 | 0.000076 | dir | ....label:read_recursive xdiff/ d0 | main | region_enter | | 0.016612 | | dir | ....label:read_recursive git-gui/ d0 | main | region_enter | | 0.016698 | | dir | ......label:read_recursive git-gui/po/ d0 | main | region_enter | | 0.016810 | | dir | ........label:read_recursive git-gui/po/glossary/ d0 | main | region_leave | | 0.016863 | 0.000053 | dir | ........label:read_recursive git-gui/po/glossary/ ...
d0 | main | region_enter | | 0.031876 | | dir | ....label:read_recursive builtin/ d0 | main | region_leave | | 0.032270 | 0.000394 | dir | ....label:read_recursive builtin/ d0 | main | region_leave | | 0.032414 | 0.017282 | dir | ..label:read_recursive d0 | main | region_leave | r1 | 0.032454 | 0.017407 | status | label:untracked ... d0 | main | exit | | 0.034279 | | | code:0 d0 | main | atexit | | 0.034322 | | | code:0 ``` Trace2 regions are similar to the existing trace_performance_enter() and trace_performance_leave() routines, but are thread-safe and maintain per-thread stacks of timers. Data Messages Data messages can be added to a region. ``` int read_index_from(struct index_state *istate, const char *path, const char *gitdir) { trace2_region_enter_printf("index", "do_read_index", the_repository, "%s", path); ... trace2_data_intmax("index", the_repository, "read/version", istate->version); trace2_data_intmax("index", the_repository, "read/cache_nr", istate->cache_nr); trace2_region_leave_printf("index", "do_read_index", the_repository, "%s", path); } ``` This example shows that the index contained 3552 entries. ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ git status ... $ cat ~/log.perf d0 | main | version | | | | | 2.20.1.156.gf9916ae094.dirty d0 | main | start | | 0.001173 | | | git status d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw d0 | main | cmd_name | | | | | status (status) d0 | main | region_enter | r1 | 0.001791 | | index | label:do_read_index .git/index d0 | main | data | r1 | 0.002494 | 0.000703 | index | ..read/version:2 d0 | main | data | r1 | 0.002520 | 0.000729 | index | ..read/cache_nr:3552 d0 | main | region_leave | r1 | 0.002539 | 0.000748 | index | label:do_read_index .git/index ... ``` Thread Events Thread messages can be added to a thread-proc.
For example, the multi-threaded preload-index code can be instrumented with a region around the thread pool and then per-thread start and exit events within the thread-proc. ``` static void *preload_thread(void *_data) { // start the per-thread clock and emit a message. trace2_thread_start("preload_thread"); // report which chunk of the array this thread was assigned. trace2_data_intmax("index", the_repository, "offset", p->offset); trace2_data_intmax("index", the_repository, "count", nr); do { ... } while (--nr > 0); ... // report elapsed time taken by this thread. trace2_thread_exit(); return NULL; } void preload_index(struct index_state *index, const struct pathspec *pathspec, unsigned int refresh_flags) { trace2_region_enter("index", "preload", the_repository); for (i = 0; i < threads; i++) { ... /* create thread */ } for (i = 0; i < threads; i++) { ... /* join thread */ } trace2_region_leave("index", "preload", the_repository); } ``` In this example, preload_index() was executed by the `main` thread and started the `preload` region. Seven threads, named `th01:preload_thread` through `th07:preload_thread`, were started. Events from each thread are atomically appended to the shared target stream as they occur, so they may appear in random order with respect to other threads. Finally, the main thread waits for the threads to finish and leaves the region. Data events are tagged with the active thread name. They are used to report the per-thread parameters. ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ git status ... $ cat ~/log.perf ...
d0 | main | region_enter | r1 | 0.002595 | | index | label:preload d0 | th01:preload_thread | thread_start | | 0.002699 | | | d0 | th02:preload_thread | thread_start | | 0.002721 | | | d0 | th01:preload_thread | data | r1 | 0.002736 | 0.000037 | index | offset:0 d0 | th02:preload_thread | data | r1 | 0.002751 | 0.000030 | index | offset:2032 d0 | th03:preload_thread | thread_start | | 0.002711 | | | d0 | th06:preload_thread | thread_start | | 0.002739 | | | d0 | th01:preload_thread | data | r1 | 0.002766 | 0.000067 | index | count:508 d0 | th06:preload_thread | data | r1 | 0.002856 | 0.000117 | index | offset:2540 d0 | th03:preload_thread | data | r1 | 0.002824 | 0.000113 | index | offset:1016 d0 | th04:preload_thread | thread_start | | 0.002710 | | | d0 | th02:preload_thread | data | r1 | 0.002779 | 0.000058 | index | count:508 d0 | th06:preload_thread | data | r1 | 0.002966 | 0.000227 | index | count:508 d0 | th07:preload_thread | thread_start | | 0.002741 | | | d0 | th07:preload_thread | data | r1 | 0.003017 | 0.000276 | index | offset:3048 d0 | th05:preload_thread | thread_start | | 0.002712 | | | d0 | th05:preload_thread | data | r1 | 0.003067 | 0.000355 | index | offset:1524 d0 | th05:preload_thread | data | r1 | 0.003090 | 0.000378 | index | count:508 d0 | th07:preload_thread | data | r1 | 0.003037 | 0.000296 | index | count:504 d0 | th03:preload_thread | data | r1 | 0.002971 | 0.000260 | index | count:508 d0 | th04:preload_thread | data | r1 | 0.002983 | 0.000273 | index | offset:508 d0 | th04:preload_thread | data | r1 | 0.007311 | 0.004601 | index | count:508 d0 | th05:preload_thread | thread_exit | | 0.008781 | 0.006069 | | d0 | th01:preload_thread | thread_exit | | 0.009561 | 0.006862 | | d0 | th03:preload_thread | thread_exit | | 0.009742 | 0.007031 | | d0 | th06:preload_thread | thread_exit | | 0.009820 | 0.007081 | | d0 | th02:preload_thread | thread_exit | | 0.010274 | 0.007553 | | d0 | th07:preload_thread | thread_exit | | 0.010477 | 0.007736 | | 
d0 | th04:preload_thread | thread_exit | | 0.011657 | 0.008947 | | d0 | main | region_leave | r1 | 0.011717 | 0.009122 | index | label:preload ... d0 | main | exit | | 0.029996 | | | code:0 d0 | main | atexit | | 0.030027 | | | code:0 ``` In this example, the preload region took 0.009122 seconds. The 7 threads took between 0.006069 and 0.008947 seconds to work on their portion of the index. Thread "th01" worked on 508 items at offset 0. Thread "th02" worked on 508 items at offset 2032. Thread "th04" worked on 508 items at offset 508. This example also shows that thread names are assigned in a racy manner as each thread starts. Config (def param) Events Dump "interesting" config values to trace2 log. We can optionally emit configuration events, see `trace2.configparams` in [git-config[1]](git-config) for how to enable it. ``` $ git config --system color.ui never $ git config --global color.ui always $ git config --local color.ui auto $ git config --list --show-scope | grep 'color.ui' system color.ui=never global color.ui=always local color.ui=auto ``` Then, mark the config `color.ui` as "interesting" config with `GIT_TRACE2_CONFIG_PARAMS`: ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ export GIT_TRACE2_CONFIG_PARAMS=color.ui $ git version ... $ cat ~/log.perf d0 | main | version | | | | | ... 
d0 | main | start | | 0.001642 | | | /usr/local/bin/git version d0 | main | cmd_name | | | | | version (version) d0 | main | def_param | | | | scope:system | color.ui:never d0 | main | def_param | | | | scope:global | color.ui:always d0 | main | def_param | | | | scope:local | color.ui:auto d0 | main | data | r0 | 0.002100 | 0.002100 | fsync | fsync/writeout-only:0 d0 | main | data | r0 | 0.002126 | 0.002126 | fsync | fsync/hardware-flush:0 d0 | main | exit | | 0.000470 | | | code:0 d0 | main | atexit | | 0.000477 | | | code:0 ``` Stopwatch Timer Events Measure the time spent in a function call or span of code that might be called from many places within the code throughout the life of the process. ``` static void expensive_function(void) { trace2_timer_start(TRACE2_TIMER_ID_TEST1); ... sleep_millisec(1000); // Do something expensive ... trace2_timer_stop(TRACE2_TIMER_ID_TEST1); } static int ut_100timer(int argc, const char **argv) { ... expensive_function(); // Do something else 1... expensive_function(); // Do something else 2... expensive_function(); return 0; } ``` In this example, we measure the total time spent in `expensive_function()` regardless of when it is called in the overall flow of the program. ``` $ export GIT_TRACE2_PERF_BRIEF=1 $ export GIT_TRACE2_PERF=~/log.perf $ t/helper/test-tool trace2 100timer 3 1000 ... $ cat ~/log.perf d0 | main | version | | | | | ... d0 | main | start | | 0.001453 | | | t/helper/test-tool trace2 100timer 3 1000 d0 | main | cmd_name | | | | | trace2 (trace2) d0 | main | exit | | 3.003667 | | | code:0 d0 | main | timer | | | | test | name:test1 intervals:3 total:3.001686 min:1.000254 max:1.000929 d0 | main | atexit | | 3.003796 | | | code:0 ``` Future work ----------- ### Relationship to the Existing Trace Api (api-trace.txt) There are a few issues to resolve before we can completely switch to Trace2. * Updating existing tests that assume `GIT_TRACE` format messages. * How to best handle custom `GIT_TRACE_<key>` messages? 
+ The `GIT_TRACE_<key>` mechanism allows each <key> to write to a different file (in addition to just stderr).
+ Do we want to maintain that ability, or simply write to the existing Trace2 targets (and convert <key> to a "category")?
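The contrast between the two mechanisms can be seen from the command line. A minimal sketch (file paths are illustrative; both mechanisms require an absolute path when tracing to a file):

```shell
# Classic trace API: each <key> may write to its own file.
GIT_TRACE_SETUP=/tmp/setup.log git status >/dev/null

# Trace2: one stream per target; the category ("dir", "index", ...)
# is a column within each event rather than a separate destination.
GIT_TRACE2_PERF=/tmp/log.perf git status >/dev/null
```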
git gitmailmap gitmailmap ========== Name ---- gitmailmap - Map author/committer names and/or E-Mail addresses Synopsis -------- $GIT\_WORK\_TREE/.mailmap Description ----------- If the file `.mailmap` exists at the toplevel of the repository, or at the location pointed to by the `mailmap.file` or `mailmap.blob` configuration options (see [git-config[1]](git-config)), it is used to map author and committer names and email addresses to canonical real names and email addresses. Syntax ------ The `#` character begins a comment to the end of line, blank lines are ignored. In the simple form, each line in the file consists of the canonical real name of an author, whitespace, and an email address used in the commit (enclosed by `<` and `>`) to map to the name. For example: ``` Proper Name <[email protected]> ``` The more complex forms are: ``` <[email protected]> <[email protected]> ``` which allows mailmap to replace only the email part of a commit, and: ``` Proper Name <[email protected]> <[email protected]> ``` which allows mailmap to replace both the name and the email of a commit matching the specified commit email address, and: ``` Proper Name <[email protected]> Commit Name <[email protected]> ``` which allows mailmap to replace both the name and the email of a commit matching both the specified commit name and email address. Both E-Mails and names are matched case-insensitively. For example this would also match the `Commit Name <[email protected]>` above: ``` Proper Name <[email protected]> CoMmIt NaMe <[email protected]> ``` Notes ----- Git does not follow symbolic links when accessing a `.mailmap` file in the working tree. This keeps behavior consistent when the file is accessed from the index or a tree versus from the filesystem. Examples -------- Your history contains commits by two authors, Jane and Joe, whose names appear in the repository under several forms: ``` Joe Developer <[email protected]> Joe R. 
Developer <[email protected]> Jane Doe <[email protected]> Jane Doe <jane@laptop.(none)> Jane D. <jane@desktop.(none)> ``` Now suppose that Joe wants his middle name initial used, and Jane prefers her family name fully spelled out. A `.mailmap` file to correct the names would look like: ``` Joe R. Developer <[email protected]> Jane Doe <[email protected]> Jane Doe <jane@desktop.(none)> ``` Note that there’s no need to map the name for `<jane@laptop.(none)>` to only correct the names. However, leaving the obviously broken `<jane@laptop.(none)>` and `<jane@desktop.(none)>` E-Mails as-is is usually not what you want. A `.mailmap` file which also corrects those is: ``` Joe R. Developer <[email protected]> Jane Doe <[email protected]> <jane@laptop.(none)> Jane Doe <[email protected]> <jane@desktop.(none)> ``` Finally, let’s say that Joe and Jane shared an E-Mail address, but not a name, e.g. by having these two commits in the history generated by a bug reporting system. I.e. names appearing in history as: ``` Joe <[email protected]> Jane <[email protected]> ``` A full `.mailmap` file which also handles those cases (an addition of two lines to the above example) would be: ``` Joe R. Developer <[email protected]> Jane Doe <[email protected]> <jane@laptop.(none)> Jane Doe <[email protected]> <jane@desktop.(none)> Joe R. Developer <[email protected]> Joe <[email protected]> Jane Doe <[email protected]> Jane <[email protected]> ``` See also -------- [git-check-mailmap[1]](git-check-mailmap) git git-pack-refs git-pack-refs ============= Name ---- git-pack-refs - Pack heads and tags for efficient repository access Synopsis -------- ``` git pack-refs [--all] [--no-prune] ``` Description ----------- Traditionally, tips of branches and tags (collectively known as `refs`) were stored one file per ref in a (sub)directory under `$GIT_DIR/refs` directory. While many branch tips tend to be updated often, most tags and some branch tips are never updated. 
When a repository has hundreds or thousands of tags, this one-file-per-ref format both wastes storage and hurts performance. This command is used to solve the storage and performance problem by storing the refs in a single file, `$GIT_DIR/packed-refs`. When a ref is missing from the traditional `$GIT_DIR/refs` directory hierarchy, it is looked up in this file and used if found. Subsequent updates to branches always create new files under `$GIT_DIR/refs` directory hierarchy. A recommended practice to deal with a repository with too many refs is to pack its refs with `--all` once, and occasionally run `git pack-refs`. Tags are by definition stationary and are not expected to change. Branch heads will be packed with the initial `pack-refs --all`, but only the currently active branch heads will become unpacked, and the next `pack-refs` (without `--all`) will leave them unpacked. Options ------- --all The command by default packs all tags and refs that are already packed, and leaves other refs alone. This is because branches are expected to be actively developed and packing their tips does not help performance. This option causes branch tips to be packed as well. Useful for a repository with many branches of historical interests. --no-prune The command usually removes loose refs under `$GIT_DIR/refs` hierarchy after packing them. This option tells it not to. Bugs ---- Older documentation written before the packed-refs mechanism was introduced may still say things like ".git/refs/heads/<branch> file exists" when it means "branch <branch> exists". git git-upload-pack git-upload-pack =============== Name ---- git-upload-pack - Send objects packed back to git-fetch-pack Synopsis -------- ``` git-upload-pack [--[no-]strict] [--timeout=<n>] [--stateless-rpc] [--advertise-refs] <directory> ``` Description ----------- Invoked by `git fetch-pack`, learns what objects the other side is missing, and sends them after packing. 
This command is usually not invoked directly by the end user. The UI for the protocol is on the `git fetch-pack` side, and the program pair is meant to be used to pull updates from a remote repository. For push operations, see `git send-pack`. Options ------- --[no-]strict Do not try <directory>/.git/ if <directory> is no Git directory. --timeout=<n> Interrupt transfer after <n> seconds of inactivity. --stateless-rpc Perform only a single read-write cycle with stdin and stdout. This fits with the HTTP POST request processing model where a program may read the request, write a response, and must exit. --http-backend-info-refs Used by [git-http-backend[1]](git-http-backend) to serve up `$GIT_URL/info/refs?service=git-upload-pack` requests. See "Smart Clients" in [gitprotocol-http[5]](gitprotocol-http) and "HTTP Transport" in the [gitprotocol-v2[5]](gitprotocol-v2) documentation. Also understood by [git-receive-pack[1]](git-receive-pack). <directory> The repository to sync from. Environment ----------- `GIT_PROTOCOL` Internal variable used for handshaking the wire protocol. Server admins may need to configure some transports to allow this variable to be passed. See the discussion in [git[1]](git). 
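Although normally spawned by `git fetch-pack` over a transport, the command can be run by hand to inspect what it would send to a client. An exploratory sketch, run from inside any local repository (output is pkt-line framed):

```shell
# Print the ref advertisement a fetch client would receive, then exit.
git upload-pack --advertise-refs .
```

With the default (v0) wire protocol this lists HEAD and each ref with its object name; when `GIT_PROTOCOL` requests protocol v2, the advertisement lists server capabilities instead.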
See also -------- [gitnamespaces[7]](gitnamespaces) git git-format-patch git-format-patch ================ Name ---- git-format-patch - Prepare patches for e-mail submission Synopsis -------- ``` git format-patch [-k] [(-o|--output-directory) <dir> | --stdout] [--no-thread | --thread[=<style>]] [(--attach|--inline)[=<boundary>] | --no-attach] [-s | --signoff] [--signature=<signature> | --no-signature] [--signature-file=<file>] [-n | --numbered | -N | --no-numbered] [--start-number <n>] [--numbered-files] [--in-reply-to=<message id>] [--suffix=.<sfx>] [--ignore-if-in-upstream] [--always] [--cover-from-description=<mode>] [--rfc] [--subject-prefix=<subject prefix>] [(--reroll-count|-v) <n>] [--to=<email>] [--cc=<email>] [--[no-]cover-letter] [--quiet] [--[no-]encode-email-headers] [--no-notes | --notes[=<ref>]] [--interdiff=<previous>] [--range-diff=<previous> [--creation-factor=<percent>]] [--filename-max-length=<n>] [--progress] [<common diff options>] [ <since> | <revision range> ] ``` Description ----------- Prepare each non-merge commit with its "patch" in one "message" per commit, formatted to resemble a UNIX mailbox. The output of this command is convenient for e-mail submission or for use with `git am`. A "message" generated by the command consists of three parts: * A brief metadata header that begins with `From <commit>` with a fixed `Mon Sep 17 00:00:00 2001` datestamp to help programs like "file(1)" to recognize that the file is an output from this command, fields that record the author identity, the author date, and the title of the change (taken from the first paragraph of the commit log message). * The second and subsequent paragraphs of the commit log message. * The "patch", which is the "diff -p --stat" output (see [git-diff[1]](git-diff)) between the commit and its parent. The log message and the patch is separated by a line with a three-dash line. There are two ways to specify which commits to operate on. 1. 
A single commit, <since>, specifies that the commits leading to the tip of the current branch that are not in the history that leads to <since> are to be output. 2. Generic <revision range> expression (see "SPECIFYING REVISIONS" section in [gitrevisions[7]](gitrevisions)) means the commits in the specified range. The first rule takes precedence in the case of a single <commit>. To apply the second rule, i.e., format everything since the beginning of history up until <commit>, use the `--root` option: `git format-patch --root <commit>`. If you want to format only <commit> itself, you can do this with `git format-patch -1 <commit>`. By default, each output file is numbered sequentially from 1, and uses the first line of the commit message (massaged for pathname safety) as the filename. With the `--numbered-files` option, the output file names will only be numbers, without the first line of the commit appended. The names of the output files are printed to standard output, unless the `--stdout` option is specified. If `-o` is specified, output files are created in <dir>. Otherwise they are created in the current working directory. The default path can be set with the `format.outputDirectory` configuration option. The `-o` option takes precedence over `format.outputDirectory`. To store patches in the current working directory even when `format.outputDirectory` points elsewhere, use `-o .`. All directory components will be created. By default, the subject of a single patch is "[PATCH] " followed by the concatenation of lines from the commit message up to the first blank line (see the DISCUSSION section of [git-commit[1]](git-commit)). When multiple patches are output, the subject prefix will instead be "[PATCH n/m] ". To force 1/1 to be added for a single patch, use `-n`. To omit patch numbers from the subject, use `-N`.
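The selection and numbering rules above can be sketched with a few invocations (the revision names are illustrative):

```shell
# One file per commit on the current branch that is not in origin/main,
# numbered 0001-..., 0002-..., named after each commit's subject line.
git format-patch origin/main

# Only the latest commit, written into ./patches as a single [PATCH] mail.
git format-patch -1 -o patches HEAD

# Everything from the root of history up to HEAD.
git format-patch --root HEAD
```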
If given `--thread`, `git-format-patch` will generate `In-Reply-To` and `References` headers to make the second and subsequent patch mails appear as replies to the first mail; this also generates a `Message-Id` header to reference. Options ------- -p --no-stat Generate plain patches without any diffstats. -U<n> --unified=<n> Generate diffs with <n> lines of context instead of the usual three. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char> --output-indicator-old=<char> --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and ' ' respectively. --indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. `patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". 
For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use `--diff-algorithm=default` option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies `--stat`. --numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. 
The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`. 
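Since `--dirstat` is shared across the diff family of commands, the quickest way to experiment with these parameters is on a working-tree diff; the threshold below is only illustrative:

```shell
# Count changed files per directory, hide directories contributing
# less than 10%, and roll child counts up into their parents.
git diff --dirstat=files,10,cumulative
```

The output is one percentage per directory, e.g. `100.0% sub/` when all changed files live under `sub/`.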
--cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>…​] Synonym for --dirstat=files,param1,param2…​ --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long that uniquely refers the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. Non default number of digits can be specified with `--abbrev=<n>`. -B[<n>][/<m>] --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create. This serves two purposes: It affects the way a change that amounts to a total rewrite of a file not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new, and the number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). 
When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%). `-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size are eligible for being picked up as a possible source of a rename to another file. -M[<n>] --find-renames[=<n>] Detect renames. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%. -C[<n>] --find-copies[=<n>] Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`. --find-copies-harder For performance reasons, by default, `-C` option finds copies only if the original file of the copy was modified in the same changeset. This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect. -D --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. 
When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair. -l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. -O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: * Blank lines are ignored, so they can be used as separators for readability. * Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash. * Each other line contains a single pattern. 
Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM\_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`". --skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise. --relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex> --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. 
Defaults to `diff.interHunkContext` or 0 if the config option is unset. -W --function-context Show whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). --ext-diff Allow an external diff helper to be executed. If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends. --no-ext-diff Disallow external diff drivers. --textconv --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". 
--dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option could be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in future. For more detailed explanation on these common options, see also [gitdiffcore[7]](gitdiffcore). -<n> Prepare patches from the topmost <n> commits. -o <dir> --output-directory <dir> Use <dir> to store the resulting files, instead of the current working directory. -n --numbered Name output in `[PATCH n/m]` format, even with a single patch. -N --no-numbered Name output in `[PATCH]` format. --start-number <n> Start numbering the patches at <n> instead of 1. --numbered-files Output file names will be a simple number sequence without the default first line of the commit appended. -k --keep-subject Do not strip/add `[PATCH]` from the first line of the commit log message. -s --signoff Add a `Signed-off-by` trailer to the commit message, using the committer identity of yourself. See the signoff option in [git-commit[1]](git-commit) for more information. --stdout Print all commits to the standard output in mbox format, instead of creating a file for each one. --attach[=<boundary>] Create multipart/mixed attachment, the first part of which is the commit message and the patch itself in the second part, with `Content-Disposition: attachment`. --no-attach Disable the creation of an attachment, overriding the configuration setting. 
--inline[=<boundary>] Create multipart/mixed attachment, the first part of which is the commit message and the patch itself in the second part, with `Content-Disposition: inline`. --thread[=<style>] --no-thread Controls addition of `In-Reply-To` and `References` headers to make the second and subsequent mails appear as replies to the first. Also controls generation of the `Message-Id` header to reference. The optional <style> argument can be either `shallow` or `deep`. `shallow` threading makes every mail a reply to the head of the series, where the head is chosen from the cover letter, the `--in-reply-to`, and the first patch mail, in this order. `deep` threading makes every mail a reply to the previous one. The default is `--no-thread`, unless the `format.thread` configuration is set. If `--thread` is specified without a style, it defaults to the style specified by `format.thread` if any, or else `shallow`. Beware that the default for `git send-email` is to thread emails itself. If you want `git format-patch` to take care of threading, you will want to ensure that threading is disabled for `git send-email`. --in-reply-to=<message id> Make the first mail (or all the mails with `--no-thread`) appear as a reply to the given <message id>, which avoids breaking threads to provide a new patch series. --ignore-if-in-upstream Do not include a patch that matches a commit in <until>..<since>. This will examine all patches reachable from <since> but not from <until> and compare them with the patches being generated, and any patch that matches is ignored. --always Include patches for commits that do not introduce any change, which are omitted by default. --cover-from-description=<mode> Controls which parts of the cover letter will be automatically populated using the branch’s description. If `<mode>` is `message` or `default`, the cover letter subject will be populated with placeholder text. The body of the cover letter will be populated with the branch’s description. 
This is the default mode when no configuration nor command line option is specified. If `<mode>` is `subject`, the first paragraph of the branch description will populate the cover letter subject. The remainder of the description will populate the body of the cover letter. If `<mode>` is `auto`, the mode will be `message` if the first paragraph of the branch description is greater than 100 bytes; otherwise `subject` will be used. If `<mode>` is `none`, both the cover letter subject and body will be populated with placeholder text. --subject-prefix=<subject prefix> Instead of the standard `[PATCH]` prefix in the subject line, use `[<subject prefix>]`. This allows for useful naming of a patch series, and can be combined with the `--numbered` option. --filename-max-length=<n> Instead of the standard 64 bytes, chomp the generated output filenames at around `<n>` bytes (too short a value will be silently raised to a reasonable length). Defaults to the value of the `format.filenameMaxLength` configuration variable, or 64 if unconfigured. --rfc Alias for `--subject-prefix="RFC PATCH"`. RFC means "Request For Comments"; use this when sending an experimental patch for discussion rather than application. -v <n> --reroll-count=<n> Mark the series as the <n>-th iteration of the topic. The output filenames have `v<n>` prepended to them, and the subject prefix ("PATCH" by default, but configurable via the `--subject-prefix` option) has ` v<n>` appended to it. E.g. `--reroll-count=4` may produce a `v4-0001-add-makefile.patch` file that has "Subject: [PATCH v4 1/20] Add makefile" in it. `<n>` does not have to be an integer (e.g. "--reroll-count=4.4", or "--reroll-count=4rev2" are allowed), but the downside of using such a reroll-count is that the range-diff/interdiff with the previous version does not state exactly which version the new iteration is compared against. --to=<email> Add a `To:` header to the email headers. 
This is in addition to any configured headers, and may be used multiple times. The negated form `--no-to` discards all `To:` headers added so far (from config or command line). --cc=<email> Add a `Cc:` header to the email headers. This is in addition to any configured headers, and may be used multiple times. The negated form `--no-cc` discards all `Cc:` headers added so far (from config or command line). --from --from=<ident> Use `ident` in the `From:` header of each commit email. If the author ident of the commit is not textually identical to the provided `ident`, place a `From:` header in the body of the message with the original author. If no `ident` is given, use the committer ident. Note that this option is only useful if you are actually sending the emails and want to identify yourself as the sender, but retain the original author (and `git am` will correctly pick up the in-body header). Note also that `git send-email` already handles this transformation for you, and this option should not be used if you are feeding the result to `git send-email`. --[no-]force-in-body-from With the e-mail sender specified via the `--from` option, by default, an in-body "From:" to identify the real author of the commit is added at the top of the commit log message if the sender is different from the author. With this option, the in-body "From:" is added even when the sender and the author have the same name and address, which may help if the mailing list software mangles the sender’s identity. Defaults to the value of the `format.forceInBodyFrom` configuration variable. --add-header=<header> Add an arbitrary header to the email headers. This is in addition to any configured headers, and may be used multiple times. For example, `--add-header="Organization: git-foo"`. The negated form `--no-add-header` discards **all** (`To:`, `Cc:`, and custom) headers added so far from config or command line. 
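For illustration, the header options above can be combined on a single invocation; the repository, addresses, and `Organization` value below are placeholders, not anything mandated by Git:

```shell
# Throwaway repository with one commit to format (all names are hypothetical).
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
echo demo >demo.txt && git add demo.txt
git -c user.name=Example -c user.email=author@example.com commit -qm "demo commit"

# Add To:, Cc:, and a custom header to the generated message.
git format-patch -1 --stdout \
    --to=list@example.com \
    --cc=maintainer@example.com \
    --add-header="Organization: git-foo" >patch.mbox

grep '^To: ' patch.mbox            # To: list@example.com
grep '^Organization: ' patch.mbox  # Organization: git-foo
```

The extra headers land in the message preamble, so `git send-email` or an IMAP client will carry them through unchanged.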
--[no-]cover-letter In addition to the patches, generate a cover letter file containing the branch description, shortlog and the overall diffstat. You can fill in a description in the file before sending it out. --encode-email-headers --no-encode-email-headers Encode email headers that have non-ASCII characters with "Q-encoding" (described in RFC 2047), instead of outputting the headers verbatim. Defaults to the value of the `format.encodeEmailHeaders` configuration variable. --interdiff=<previous> As a reviewer aid, insert an interdiff into the cover letter, or as commentary of the lone patch of a 1-patch series, showing the differences between the previous version of the patch series and the series currently being formatted. `previous` is a single revision naming the tip of the previous series which shares a common base with the series being formatted (for example `git format-patch --cover-letter --interdiff=feature/v1 -3 feature/v2`). --range-diff=<previous> As a reviewer aid, insert a range-diff (see [git-range-diff[1]](git-range-diff)) into the cover letter, or as commentary of the lone patch of a 1-patch series, showing the differences between the previous version of the patch series and the series currently being formatted. `previous` can be a single revision naming the tip of the previous series if it shares a common base with the series being formatted (for example `git format-patch --cover-letter --range-diff=feature/v1 -3 feature/v2`), or a revision range if the two versions of the series are disjoint (for example `git format-patch --cover-letter --range-diff=feature/v1~3..feature/v1 -3 feature/v2`). Note that diff options passed to the command affect how the primary product of `format-patch` is generated, and they are not passed to the underlying `range-diff` machinery used to generate the cover-letter material (this may change in the future). 
--creation-factor=<percent> Used with `--range-diff`, tweak the heuristic which matches up commits between the previous and current series of patches by adjusting the creation/deletion cost fudge factor. See [git-range-diff[1]](git-range-diff) for details. --notes[=<ref>] --no-notes Append the notes (see [git-notes[1]](git-notes)) for the commit after the three-dash line. The expected use case of this is to write supporting explanation for the commit that does not belong to the commit log message proper, and include it with the patch submission. While one can simply write these explanations after `format-patch` has run but before sending, keeping them as Git notes allows them to be maintained between versions of the patch series (but see the discussion of the `notes.rewrite` configuration options in [git-notes[1]](git-notes) to use this workflow). The default is `--no-notes`, unless the `format.notes` configuration is set. --[no-]signature=<signature> Add a signature to each message produced. Per RFC 3676 the signature is separated from the body by a line with '-- ' on it. If the signature option is omitted the signature defaults to the Git version number. --signature-file=<file> Works just like `--signature` except the signature is read from a file. --suffix=.<sfx> Instead of using `.patch` as the suffix for generated filenames, use the specified suffix. A common alternative is `--suffix=.txt`. Leaving this empty will remove the `.patch` suffix. Note that the leading character does not have to be a dot; for example, you can use `--suffix=-patch` to get `0001-description-of-my-change-patch`. -q --quiet Do not print the names of the generated files to standard output. --no-binary Do not output contents of changes in binary files; instead display a notice that those files changed. Patches generated using this option cannot be applied properly, but they are still useful for code review. 
--zero-commit Output an all-zero hash in each patch’s From header instead of the hash of the commit. --[no-]base[=<commit>] Record the base tree information to identify the state the patch series applies to. See the BASE TREE INFORMATION section below for details. If <commit> is "auto", a base commit is automatically chosen. The `--no-base` option overrides a `format.useAutoBase` configuration. --root Treat the revision argument as a <revision range>, even if it is just a single commit (that would normally be treated as a <since>). Note that root commits included in the specified range are always formatted as creation patches, independently of this flag. --progress Show progress reports on stderr as patches are generated. Configuration ------------- You can specify extra mail header lines to be added to each message, defaults for the subject prefix and file suffix, number patches when outputting more than one patch, add "To:" or "Cc:" headers, configure attachments, change the patch output directory, and sign off patches with configuration variables. 
``` [format] headers = "Organization: git-foo\n" subjectPrefix = CHANGE suffix = .txt numbered = auto to = <email> cc = <email> attach [ = mime-boundary-string ] signOff = true outputDirectory = <directory> coverLetter = auto coverFromDescription = auto ``` Discussion ---------- The patch produced by `git format-patch` is in UNIX mailbox format, with a fixed "magic" time stamp to indicate that the file is output from format-patch rather than a real mailbox, like so: ``` From 8f72bad1baf19a53459661343e21d6491c3908d3 Mon Sep 17 00:00:00 2001 From: Tony Luck <[email protected]> Date: Tue, 13 Jul 2010 11:42:54 -0700 Subject: [PATCH] =?UTF-8?q?[IA64]=20Put=20ia64=20config=20files=20on=20the=20?= =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig=20diet?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit arch/arm config files were slimmed down using a python script (See commit c2330e286f68f1c408b4aa6515ba49d57f05beae comment) Do the same for ia64 so we can have sleek & trim looking ... ``` Typically it will be placed in a MUA’s drafts folder, edited to add timely commentary that should not go in the changelog after the three dashes, and then sent as a message whose body, in our example, starts with "arch/arm config files were…​". On the receiving end, readers can save interesting patches in a UNIX mailbox and apply them with [git-am[1]](git-am). When a patch is part of an ongoing discussion, the patch generated by `git format-patch` can be tweaked to take advantage of the `git am --scissors` feature. After your response to the discussion comes a line that consists solely of "`-- >8 --`" (scissors and perforation), followed by the patch with unnecessary header fields removed: ``` ... > So we should do such-and-such. Makes sense to me. How about this patch? -- >8 -- Subject: [IA64] Put ia64 config files on the Uwe Kleine-König diet arch/arm config files were slimmed down using a python script ... 
``` When sending a patch this way, most often you are sending your own patch, so in addition to the "`From $SHA1 $magic_timestamp`" marker you should omit `From:` and `Date:` lines from the patch file. The patch title is likely to be different from the subject of the discussion the patch is in response to, so it is likely that you would want to keep the Subject: line, like the example above. ### Checking for patch corruption Many mailers if not set up properly will corrupt whitespace. Here are two common types of corruption: * Empty context lines that do not have `any` whitespace. * Non-empty context lines that have one extra whitespace at the beginning. One way to test if your MUA is set up correctly is: * Send the patch to yourself, exactly the way you would, except with To: and Cc: lines that do not contain the list and maintainer address. * Save that patch to a file in UNIX mailbox format. Call it a.patch, say. * Apply it: ``` $ git fetch <project> master:test-apply $ git switch test-apply $ git restore --source=HEAD --staged --worktree :/ $ git am a.patch ``` If it does not apply correctly, there can be various reasons. * The patch itself does not apply cleanly. That is `bad` but does not have much to do with your MUA. You might want to rebase the patch with [git-rebase[1]](git-rebase) before regenerating it in this case. * The MUA corrupted your patch; "am" would complain that the patch does not apply. Look in the .git/rebase-apply/ subdirectory and see what `patch` file contains and check for the common corruption patterns mentioned above. * While at it, check the `info` and `final-commit` files as well. If what is in `final-commit` is not exactly what you would want to see in the commit log message, it is very likely that the receiver would end up hand editing the log message when applying your patch. Things like "Hi, this is my first patch.\n" in the patch e-mail should come after the three-dash line that signals the end of the commit message. 
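The apply test above can also be exercised end-to-end in a scratch repository, independent of any mailer; the repository, file names, and identities below are hypothetical stand-ins for your project and your saved `a.patch`:

```shell
# Scratch repository standing in for the project (hypothetical paths and identities).
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=Example -c user.email=author@example.com commit -q --allow-empty -m "base"
echo hello >greeting.txt && git add greeting.txt
git -c user.name=Example -c user.email=author@example.com commit -qm "add greeting"

# Generate the patch, rewind the branch, then apply the patch as a recipient would.
git format-patch -1 -o .. HEAD
git reset -q --hard HEAD^
git -c user.name=Example -c user.email=author@example.com am -q ../0001-add-greeting.patch

git log -1 --format=%s             # add greeting
```

If `git am` stops instead, inspect `.git/rebase-apply/` for the corruption patterns described above.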
MUA-specific hints ------------------ Here are some hints on how to successfully submit patches inline using various mailers. ### GMail GMail does not have any way to turn off line wrapping in the web interface, so it will mangle any emails that you send. You can however use "git send-email" and send your patches through the GMail SMTP server, or use any IMAP email client to connect to the Google IMAP server and forward the emails through that. For hints on using `git send-email` to send your patches through the GMail SMTP server, see the EXAMPLE section of [git-send-email[1]](git-send-email). For hints on submission using the IMAP interface, see the EXAMPLE section of [git-imap-send[1]](git-imap-send). ### Thunderbird By default, Thunderbird will both wrap emails and flag them as being `format=flowed`, both of which will make the resulting email unusable by Git. There are three different approaches: use an add-on to turn off line wraps, configure Thunderbird to not mangle patches, or use an external editor to keep Thunderbird from mangling the patches. #### Approach #1 (add-on) Install the Toggle Word Wrap add-on that is available from <https://addons.mozilla.org/thunderbird/addon/toggle-word-wrap/> It adds a menu entry "Enable Word Wrap" in the composer’s "Options" menu that you can tick off. Now you can compose the message as you otherwise do (cut + paste, `git format-patch` | `git imap-send`, etc), but you have to insert line breaks manually in any text that you type. #### Approach #2 (configuration) Three steps: 1. Configure your mail server composition as plain text: Edit…​Account Settings…​Composition & Addressing, uncheck "Compose Messages in HTML". 2. Configure your general composition window to not wrap. In Thunderbird 2: Edit..Preferences..Composition, wrap plain text messages at 0. In Thunderbird 3: Edit..Preferences..Advanced..Config Editor. Search for "mail.wrap\_long\_lines". Toggle it to make sure it is set to `false`. 
Also, search for "mailnews.wraplength" and set the value to 0. 3. Disable the use of format=flowed: Edit..Preferences..Advanced..Config Editor. Search for "mailnews.send\_plaintext\_flowed". Toggle it to make sure it is set to `false`. After that is done, you should be able to compose email as you otherwise would (cut + paste, `git format-patch` | `git imap-send`, etc), and the patches will not be mangled. #### Approach #3 (external editor) The following Thunderbird extensions are needed: AboutConfig from <http://aboutconfig.mozdev.org/> and External Editor from <http://globs.org/articles.php?lng=en&pg=8> 1. Prepare the patch as a text file using your method of choice. 2. Before opening a compose window, use Edit→Account Settings to uncheck the "Compose messages in HTML format" setting in the "Composition & Addressing" panel of the account to be used to send the patch. 3. In the main Thunderbird window, `before` you open the compose window for the patch, use Tools→about:config to set the following to the indicated values: ``` mailnews.send_plaintext_flowed => false mailnews.wraplength => 0 ``` 4. Open a compose window and click the external editor icon. 5. In the external editor window, read in the patch file and exit the editor normally. Side note: it may be possible to do step 2 with about:config and the following settings but no one’s tried yet. ``` mail.html_compose => false mail.identity.default.compose_html => false mail.identity.id?.compose_html => false ``` There is a script in contrib/thunderbird-patch-inline which can help you include patches with Thunderbird in an easy way. To use it, do the steps above and then use the script as the external editor. ### KMail This should help you to submit patches inline using KMail. 1. Prepare the patch as a text file. 2. Click on New Mail. 3. Go under "Options" in the Composer window and be sure that "Word wrap" is not set. 4. Use Message → Insert file…​ and insert the patch. 5. 
Back in the compose window: add whatever other text you wish to the message, complete the addressing and subject fields, and press send. Base tree information --------------------- The base tree information block is used for maintainers or third party testers to know the exact state the patch series applies to. It consists of the `base commit`, which is a well-known commit that is part of the stable part of the project history everybody else works off of, and zero or more `prerequisite patches`, which are well-known patches in flight that are not yet part of the `base commit` and need to be applied on top of the `base commit` in topological order before the patches can be applied. The `base commit` is shown as "base-commit: " followed by the 40-hex of the commit object name. A `prerequisite patch` is shown as "prerequisite-patch-id: " followed by the 40-hex `patch id`, which can be obtained by passing the patch through the `git patch-id --stable` command. Imagine that on top of the public commit P, you applied well-known patches X, Y and Z from somebody else, and then built your three-patch series A, B, C; the history would look like this: ``` ---P---X---Y---Z---A---B---C ``` With `git format-patch --base=P -3 C` (or variants thereof, e.g. with `--cover-letter` or using `Z..C` instead of `-3 C` to specify the range), the base tree information block is shown at the end of the first message the command outputs (either the first patch, or the cover letter), like this: ``` base-commit: P prerequisite-patch-id: X prerequisite-patch-id: Y prerequisite-patch-id: Z ``` For a non-linear topology, such as ``` ---P---X---A---M---C \ / Y---Z---B ``` you can also use `git format-patch --base=P -3 C` to generate patches for A, B and C, and the identifiers for P, X, Y, Z are appended at the end of the first message. 
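To see the trailer concretely, here is a minimal two-commit repository standing in for the history above (no prerequisite patches, so only the `base-commit` line appears); the file names and identities are hypothetical:

```shell
# Minimal stand-in for the history above: a public base commit P plus one patch commit.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
echo one >file && git add file
git -c user.name=Example -c user.email=author@example.com commit -qm "P: public base"
echo two >>file && git add file
git -c user.name=Example -c user.email=author@example.com commit -qm "A: my change"

# Record base tree information for the one-patch series built on top of P.
P=$(git rev-parse HEAD^)
git format-patch --base="$P" -1 --stdout >series.mbox
grep '^base-commit: ' series.mbox   # base-commit: <full object name of P>
```

The trailer appears after the patch text and diffstat, at the end of the first (here, only) message.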
With `--base=auto`, the base commit is automatically computed as the merge base between the tip commit of the remote-tracking branch and the revision range specified on the command line. For a local branch, you need to make it track a remote branch with `git branch --set-upstream-to` before using this option. Examples -------- * Extract commits between revisions R1 and R2, and apply them on top of the current branch using `git am` to cherry-pick them: ``` $ git format-patch -k --stdout R1..R2 | git am -3 -k ``` * Extract all commits which are in the current branch but not in the origin branch: ``` $ git format-patch origin ``` For each commit a separate file is created in the current directory. * Extract all commits that lead to `origin` since the inception of the project: ``` $ git format-patch --root origin ``` * The same as the previous one: ``` $ git format-patch -M -B origin ``` Additionally, it detects and handles renames and complete rewrites intelligently to produce a renaming patch. A renaming patch reduces the amount of text output, and generally makes it easier to review. Note that non-Git "patch" programs won’t understand renaming patches, so use it only when you know the recipient uses Git to apply your patch. * Extract the three topmost commits from the current branch and format them as e-mailable patches: ``` $ git format-patch -3 ``` Caveats ------- Note that `format-patch` will omit merge commits from the output, even if they are part of the requested range. A simple "patch" does not include enough information for the receiving end to reproduce the same merge commit. See also -------- [git-am[1]](git-am), [git-send-email[1]](git-send-email)
git git-apply git-apply ========= Name ---- git-apply - Apply a patch to files and/or to the index Synopsis -------- ``` git apply [--stat] [--numstat] [--summary] [--check] [--index | --intent-to-add] [--3way] [--apply] [--no-add] [--build-fake-ancestor=<file>] [-R | --reverse] [--allow-binary-replacement | --binary] [--reject] [-z] [-p<n>] [-C<n>] [--inaccurate-eof] [--recount] [--cached] [--ignore-space-change | --ignore-whitespace] [--whitespace=(nowarn|warn|fix|error|error-all)] [--exclude=<path>] [--include=<path>] [--directory=<root>] [--verbose | --quiet] [--unsafe-paths] [--allow-empty] [<patch>…​] ``` Description ----------- Reads the supplied diff output (i.e. "a patch") and applies it to files. When running from a subdirectory in a repository, patched paths outside the directory are ignored. With the `--index` option the patch is also applied to the index, and with the `--cached` option the patch is only applied to the index. Without these options, the command applies the patch only to files, and does not require them to be in a Git repository. This command applies the patch but does not create a commit. Use [git-am[1]](git-am) to create commits from patches generated by [git-format-patch[1]](git-format-patch) and/or received by email. Options ------- <patch>…​ The files to read the patch from. `-` can be used to read from the standard input. --stat Instead of applying the patch, output diffstat for the input. Turns off "apply". --numstat Similar to `--stat`, but shows the number of added and deleted lines in decimal notation and the pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. Turns off "apply". --summary Instead of applying the patch, output a condensed summary of information obtained from git diff extended headers, such as creations, renames and mode changes. Turns off "apply". 
--check Instead of applying the patch, see if the patch is applicable to the current working tree and/or the index file and detects errors. Turns off "apply". --index Apply the patch to both the index and the working tree (or merely check that it would apply cleanly to both if `--check` is in effect). Note that `--index` expects index entries and working tree copies for relevant paths to be identical (their contents and metadata such as file mode must match), and will raise an error if they are not, even if the patch would apply cleanly to both the index and the working tree in isolation. --cached Apply the patch to just the index, without touching the working tree. If `--check` is in effect, merely check that it would apply cleanly to the index entry. --intent-to-add When applying the patch only to the working tree, mark new files to be added to the index later (see `--intent-to-add` option in [git-add[1]](git-add)). This option is ignored unless running in a Git repository and `--index` is not specified. Note that `--index` could be implied by other options such as `--cached` or `--3way`. -3 --3way Attempt 3-way merge if the patch records the identity of blobs it is supposed to apply to and we have those blobs available locally, possibly leaving the conflict markers in the files in the working tree for the user to resolve. This option implies the `--index` option unless the `--cached` option is used, and is incompatible with the `--reject` option. When used with the `--cached` option, any conflicts are left at higher stages in the cache. --build-fake-ancestor=<file> Newer `git diff` output has embedded `index information` for each blob to help identify the original version that the patch applies to. When this flag is given, and if the original versions of the blobs are available locally, builds a temporary index containing those blobs. When a pure mode change is encountered (which has no index information), the information is read from the current index instead. 
-R --reverse Apply the patch in reverse. --reject For atomicity, `git apply` by default fails the whole patch and does not touch the working tree when some of the hunks do not apply. This option makes it apply the parts of the patch that are applicable, and leave the rejected hunks in corresponding \*.rej files. -z When `--numstat` has been given, do not munge pathnames, but use a NUL-terminated machine-readable format. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). -p<n> Remove <n> leading path components (separated by slashes) from traditional diff paths. E.g., with `-p2`, a patch against `a/dir/file` will be applied directly to `file`. The default is 1. -C<n> Ensure at least <n> lines of surrounding context match before and after each change. When fewer lines of surrounding context exist they all must match. By default no context is ever ignored. --unidiff-zero By default, `git apply` expects that the patch being applied is a unified diff with at least one line of context. This provides good safety measures, but breaks down when applying a diff generated with `--unified=0`. To bypass these checks use `--unidiff-zero`. Note, for the reasons stated above usage of context-free patches is discouraged. --apply If you use any of the options marked "Turns off `apply`" above, `git apply` reads and outputs the requested information without actually applying the patch. Give this flag after those flags to also apply the patch. --no-add When applying a patch, ignore additions made by the patch. This can be used to extract the common part between two files by first running `diff` on them and applying the result with this option, which would apply the deletion part but not the addition part. --allow-binary-replacement --binary Historically we did not allow binary patch applied without an explicit permission from the user, and this flag was the way to do so. 
Currently we always allow binary patch application, so this is a no-op. --exclude=<path-pattern> Don’t apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to exclude certain files or directories. --include=<path-pattern> Apply changes to files matching the given path pattern. This can be useful when importing patchsets, where you want to include certain files or directories. When `--exclude` and `--include` patterns are used, they are examined in the order they appear on the command line, and the first match determines if a patch to each path is used. A patch to a path that does not match any include/exclude pattern is used by default if there is no include pattern on the command line, and ignored if there is any include pattern. --ignore-space-change --ignore-whitespace When applying a patch, ignore changes in whitespace in context lines if necessary. Context lines will preserve their whitespace, and they will not undergo whitespace fixing regardless of the value of the `--whitespace` option. New lines will still be fixed, though. --whitespace=<action> When applying a patch, detect a new or modified line that has whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. By default, trailing whitespaces (including lines that solely consist of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. By default, the command outputs warning messages but applies the patch. When `git-apply` is used for statistics and not applying a patch, it defaults to `nowarn`. You can use different `<action>` values to control this behavior: * `nowarn` turns off the trailing whitespace warning. * `warn` outputs warnings for a few such errors, but applies the patch as-is (default). 
* `fix` outputs warnings for a few such errors, and applies the patch after fixing them (`strip` is a synonym --- the tool used to consider only trailing whitespace characters as errors, and the fix involved `stripping` them, but modern Gits do more). * `error` outputs warnings for a few such errors, and refuses to apply the patch. * `error-all` is similar to `error` but shows all errors. --inaccurate-eof Under certain circumstances, some versions of `diff` do not correctly detect a missing new-line at the end of the file. As a result, patches created by such `diff` programs do not record incomplete lines correctly. This option adds support for applying such patches by working around this bug. -v --verbose Report progress to stderr. By default, only a message about the current patch being applied will be printed. This option will cause additional information to be reported. -q --quiet Suppress stderr output. Messages about patch status and progress will not be printed. --recount Do not trust the line counts in the hunk headers, but infer them by inspecting the patch (e.g. after editing the patch without adjusting the hunk headers appropriately). --directory=<root> Prepend <root> to all filenames. If a "-p" argument was also passed, it is applied before prepending the new root. For example, a patch that talks about updating `a/git-gui.sh` to `b/git-gui.sh` can be applied to the file in the working tree `modules/git-gui/git-gui.sh` by running `git apply --directory=modules/git-gui`. --unsafe-paths By default, a patch that affects outside the working area (either a Git controlled working tree, or the current working directory when "git apply" is used as a replacement of GNU patch) is rejected as a mistake (or a mischief). When `git apply` is used as a "better GNU patch", the user can pass the `--unsafe-paths` option to override this safety check. This option has no effect when `--index` or `--cached` is in use. 
--allow-empty
Don’t return error for patches containing no diff. This includes empty patches and patches with commit text only.

Configuration
-------------

Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there:

apply.ignoreWhitespace
When set to `change`, tells `git apply` to ignore changes in whitespace, in the same way as the `--ignore-space-change` option. When set to one of `no`, `none`, `never`, or `false`, it tells `git apply` to respect all whitespace differences. See [git-apply[1]](git-apply).

apply.whitespace
Tells `git apply` how to handle whitespaces, in the same way as the `--whitespace` option. See [git-apply[1]](git-apply).

Submodules
----------

If the patch contains any changes to submodules then `git apply` treats these changes as follows. If `--index` is specified (explicitly or implicitly), then the submodule commits must match the index exactly for the patch to apply. If any of the submodules are checked-out, then these check-outs are completely ignored, i.e., they are not required to be up to date or clean and they are not updated. If `--index` is not specified, then the submodule commits in the patch are ignored and only the absence or presence of the corresponding subdirectory is checked and (if possible) updated.

See also
--------

[git-am[1]](git-am).

git git-name-rev

git-name-rev
============

Name
----

git-name-rev - Find symbolic names for given revs

Synopsis
--------

```
git name-rev [--tags] [--refs=<pattern>] ( --all | --stdin | <commit-ish>…​ )
```

Description
-----------

Finds symbolic names suitable for human digestion for revisions given in any format parsable by `git rev-parse`.

Options
-------

--tags
Do not use branch names, but only tags to name the commits.

--refs=<pattern>
Only use refs whose names match a given shell pattern. The pattern can be one of branch name, tag name or fully qualified ref name.
If given multiple times, use refs whose names match any of the given shell patterns. Use `--no-refs` to clear any previous ref patterns given. --exclude=<pattern> Do not use any ref whose name matches a given shell pattern. The pattern can be one of branch name, tag name or fully qualified ref name. If given multiple times, a ref will be excluded when it matches any of the given patterns. When used together with --refs, a ref will be used as a match only when it matches at least one --refs pattern and does not match any --exclude patterns. Use `--no-exclude` to clear the list of exclude patterns. --all List all commits reachable from all refs --annotate-stdin Transform stdin by substituting all the 40-character SHA-1 hexes (say $hex) with "$hex ($rev\_name)". When used with --name-only, substitute with "$rev\_name", omitting $hex altogether. For example: ``` $ cat sample.txt An abbreviated revision 2ae0a9cb82 will not be substituted. The full name after substitution is 2ae0a9cb8298185a94e5998086f380a355dd8907, while its tree object is 70d105cc79e63b81cfdcb08a15297c23e60b07ad $ git name-rev --annotate-stdin <sample.txt An abbreviated revision 2ae0a9cb82 will not be substituted. The full name after substitution is 2ae0a9cb8298185a94e5998086f380a355dd8907 (master), while its tree object is 70d105cc79e63b81cfdcb08a15297c23e60b07ad $ git name-rev --name-only --annotate-stdin <sample.txt An abbreviated revision 2ae0a9cb82 will not be substituted. The full name after substitution is master, while its tree object is 70d105cc79e63b81cfdcb08a15297c23e60b07ad ``` --stdin This option is deprecated in favor of `git name-rev --annotate-stdin`. They are functionally equivalent. --name-only Instead of printing both the SHA-1 and the name, print only the name. If given with --tags the usual tag prefix of "tags/" is also omitted from the name, matching the output of `git-describe` more closely. 
--no-undefined Die with error code != 0 when a reference is undefined, instead of printing `undefined`. --always Show uniquely abbreviated commit object as fallback. Examples -------- Given a commit, find out where it is relative to the local refs. Say somebody wrote you about that fantastic commit 33db5f4d9027a10e477ccf054b2c1ab94f74c85a. Of course, you look into the commit, but that only tells you what happened, but not the context. Enter `git name-rev`: ``` % git name-rev 33db5f4d9027a10e477ccf054b2c1ab94f74c85a 33db5f4d9027a10e477ccf054b2c1ab94f74c85a tags/v0.99~940 ``` Now you are wiser, because you know that it happened 940 revisions before v0.99. Another nice thing you can do is: ``` % git log | git name-rev --stdin ``` git git-diagnose git-diagnose ============ Name ---- git-diagnose - Generate a zip archive of diagnostic information Synopsis -------- ``` git diagnose [(-o | --output-directory) <path>] [(-s | --suffix) <format>] [--mode=<mode>] ``` Description ----------- Collects detailed information about the user’s machine, Git client, and repository state and packages that information into a zip archive. The generated archive can then, for example, be shared with the Git mailing list to help debug an issue or serve as a reference for independent debugging. By default, the following information is captured in the archive: * `git version --build-options` * The path to the repository root * The available disk space on the filesystem * The name and size of each packfile, including those in alternate object stores * The total count of loose objects, as well as counts broken down by `.git/objects` subdirectory Additional information can be collected by selecting a different diagnostic mode using the `--mode` option. This tool differs from [git-bugreport[1]](git-bugreport) in that it collects much more detailed information with a greater focus on reporting the size and data shape of repository contents. 
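As an illustration (the output directory and suffix format here are hypothetical, and `git diagnose` requires a reasonably recent Git), a stats-only archive can be produced with:

```
$ git diagnose -o /tmp -s %Y-%m-%d
```

Since `--mode=stats` is the default, this writes an archive such as `/tmp/git-diagnostics-2024-01-31.zip`, expanding the suffix with the current local time.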
Options ------- -o <path> --output-directory <path> Place the resulting diagnostics archive in `<path>` instead of the current directory. -s <format> --suffix <format> Specify an alternate suffix for the diagnostics archive name, to create a file named `git-diagnostics-<formatted suffix>`. This should take the form of a strftime(3) format string; the current local time will be used. --mode=(stats|all) Specify the type of diagnostics that should be collected. The default behavior of `git diagnose` is equivalent to `--mode=stats`. The `--mode=all` option collects everything included in `--mode=stats`, as well as copies of `.git`, `.git/hooks`, `.git/info`, `.git/logs`, and `.git/objects/info` directories. This additional information may be sensitive, as it can be used to reconstruct the full contents of the diagnosed repository. Users should exercise caution when sharing an archive generated with `--mode=all`. git git-archimport git-archimport ============== Name ---- git-archimport - Import a GNU Arch repository into Git Synopsis -------- ``` git archimport [-h] [-v] [-o] [-a] [-f] [-T] [-D <depth>] [-t <tempdir>] <archive>/<branch>[:<git-branch>]…​ ``` Description ----------- Imports a project from one or more GNU Arch repositories. It will follow branches and repositories within the namespaces defined by the <archive>/<branch> parameters supplied. If it cannot find the remote branch a merge comes from it will just import it as a regular commit. If it can find it, it will mark it as a merge whenever possible (see discussion below). The script expects you to provide the key roots where it can start the import from an `initial import` or `tag` type of Arch commit. It will follow and import new branches within the provided roots. It expects to be dealing with one project only. If it sees branches that have different roots, it will refuse to run. In that case, edit your <archive>/<branch> parameters to define clearly the scope of the import. 
`git archimport` uses `tla` extensively in the background to access the Arch repository. Make sure you have a recent version of `tla` available in the path. `tla` must know about the repositories you pass to `git archimport`. For the initial import, `git archimport` expects to find itself in an empty directory. To follow the development of a project that uses Arch, rerun `git archimport` with the same parameters as the initial import to perform incremental imports. While `git archimport` will try to create sensible branch names for the archives that it imports, it is also possible to specify Git branch names manually. To do so, write a Git branch name after each <archive>/<branch> parameter, separated by a colon. This way, you can shorten the Arch branch names and convert Arch jargon to Git jargon, for example mapping a "PROJECT--devo--VERSION" branch to "master". Associating multiple Arch branches to one Git branch is possible; the result will make the most sense only if no commits are made to the first branch, after the second branch is created. Still, this is useful to convert Arch repositories that had been rotated periodically. Merges ------ Patch merge data from Arch is used to mark merges in Git as well. Git does not care much about tracking patches, and only considers a merge when a branch incorporates all the commits since the point they forked. The end result is that Git will have a good idea of how far branches have diverged. So the import process does lose some patch-trading metadata. Fortunately, when you try and merge branches imported from Arch, Git will find a good merge base, and it has a good chance of identifying patches that have been traded out-of-sequence between the branches. Options ------- -h Display usage. -v Verbose output. -T Many tags. Will create a tag for every commit, reflecting the commit name in the Arch repository. -f Use the fast patchset import strategy. 
This can be significantly faster for large trees, but cannot handle directory renames or permissions changes. The default strategy is slow and safe. -o Use this for compatibility with old-style branch names used by earlier versions of `git archimport`. Old-style branch names were category--branch, whereas new-style branch names are archive,category--branch--version. In both cases, names given on the command-line will override the automatically-generated ones. -D <depth> Follow merge ancestry and attempt to import trees that have been merged from. Specify a depth greater than 1 if patch logs have been pruned. -a Attempt to auto-register archives at `http://mirrors.sourcecontrol.net` This is particularly useful with the -D option. -t <tmpdir> Override the default tempdir. <archive>/<branch> <archive>/<branch> identifier in a format that `tla log` understands.
git gitcvs-migration

gitcvs-migration
================

Name
----

gitcvs-migration - Git for CVS users

Synopsis
--------

```
git cvsimport *
```

Description
-----------

Git differs from CVS in that every working tree contains a repository with a full copy of the project history, and no repository is inherently more important than any other. However, you can emulate the CVS model by designating a single shared repository which people can synchronize with; this document explains how to do that. Some basic familiarity with Git is required. Having gone through [gittutorial[7]](gittutorial) and [gitglossary[7]](gitglossary) should be sufficient.

Developing against a shared repository
--------------------------------------

Suppose a shared repository is set up in /pub/repo.git on the host foo.com. Then as an individual committer you can clone the shared repository over ssh with:

```
$ git clone foo.com:/pub/repo.git/ my-project
$ cd my-project
```

and hack away. The equivalent of `cvs update` is

```
$ git pull origin
```

which merges in any work that others might have done since the clone operation. If there are uncommitted changes in your working tree, commit them first before running git pull.

Note: The `pull` command knows where to get updates from because of certain configuration variables that were set by the first `git clone` command; see `git config -l` and the [git-config[1]](git-config) man page for details.

You can update the shared repository with your changes by first committing your changes, and then using the `git push` command:

```
$ git push origin master
```

to "push" those commits to the shared repository. If someone else has updated the repository more recently, `git push`, like `cvs commit`, will complain, in which case you must pull any changes before attempting the push again. In the `git push` command above we specify the name of the remote branch to update (`master`).
If we leave that out, `git push` tries to update any branches in the remote repository that have the same name as a branch in the local repository. So the last `push` can be done with either of:

```
$ git push origin
$ git push foo.com:/pub/project.git/
```

as long as the shared repository does not have any branches other than `master`.

Setting up a shared repository
------------------------------

We assume you have already created a Git repository for your project, possibly created from scratch or from a tarball (see [gittutorial[7]](gittutorial)), or imported from an already existing CVS repository (see the next section). Assume your existing repo is at /home/alice/myproject. Create a new "bare" repository (a repository without a working tree) and fetch your project into it:

```
$ mkdir /pub/my-repo.git
$ cd /pub/my-repo.git
$ git --bare init --shared
$ git --bare fetch /home/alice/myproject master:master
```

Next, give every team member read/write access to this repository. One easy way to do this is to give all the team members ssh access to the machine where the repository is hosted. If you don’t want to give them a full shell on the machine, there is a restricted shell which only allows users to do Git pushes and pulls; see [git-shell[1]](git-shell). Put all the committers in the same group, and make the repository writable by that group:

```
$ chgrp -R $group /pub/my-repo.git
```

Make sure committers have a umask of at most 027, so that the directories they create are writable and searchable by other group members.

Importing a cvs archive
-----------------------

Note: These instructions use the `git-cvsimport` script which ships with git, but other importers may provide better results. See the note in [git-cvsimport[1]](git-cvsimport) for other options.

First, install version 2.1 or higher of cvsps from <https://github.com/andreyvit/cvsps> and make sure it is in your path.
Then cd to a checked out CVS working directory of the project you are interested in and run [git-cvsimport[1]](git-cvsimport): ``` $ git cvsimport -C <destination> <module> ``` This puts a Git archive of the named CVS module in the directory <destination>, which will be created if necessary. The import checks out from CVS every revision of every file. Reportedly cvsimport can average some twenty revisions per second, so for a medium-sized project this should not take more than a couple of minutes. Larger projects or remote repositories may take longer. The main trunk is stored in the Git branch named `origin`, and additional CVS branches are stored in Git branches with the same names. The most recent version of the main trunk is also left checked out on the `master` branch, so you can start adding your own changes right away. The import is incremental, so if you call it again next month it will fetch any CVS updates that have been made in the meantime. For this to work, you must not modify the imported branches; instead, create new branches for your own changes, and merge in the imported branches as necessary. If you want a shared repository, you will need to make a bare clone of the imported directory, as described above. Then treat the imported directory as another development clone for purposes of merging incremental imports. Advanced shared repository management ------------------------------------- Git allows you to specify scripts called "hooks" to be run at certain points. You can use these, for example, to send all commits to the shared repository to a mailing list. See [githooks[5]](githooks). You can enforce finer grained permissions using update hooks. See [Controlling access to branches using update hooks](https://git-scm.com/docs/howto/update-hook-example). 
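To sketch what such an update hook can look like (the protected branch and the committer list below are hypothetical, and a production hook would typically check group membership or an ACL file instead): the hook receives the ref name and the old and new object IDs, and a non-zero exit status causes Git to reject the update.

```shell
# Sketch of update-hook logic: only listed users may move master.
# The branch name and user list are hypothetical.
# check_ref_update <refname> <oldrev> <newrev> <user>
check_ref_update() {
    refname=$1 user=$4
    allowed="alice bob"   # hypothetical committer list
    if [ "$refname" = "refs/heads/master" ]; then
        case " $allowed " in
        *" $user "*) ;;   # listed user: permit the update
        *)
            echo "update of $refname denied for $user" >&2
            return 1      # non-zero status makes git refuse the push
            ;;
        esac
    fi
    return 0
}
```

An actual hook would live at `$GIT_DIR/hooks/update`, be executable, and end with a line like `check_ref_update "$1" "$2" "$3" "$USER"`.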
Providing cvs access to a git repository ---------------------------------------- It is also possible to provide true CVS access to a Git repository, so that developers can still use CVS; see [git-cvsserver[1]](git-cvsserver) for details. Alternative development models ------------------------------ CVS users are accustomed to giving a group of developers commit access to a common repository. As we’ve seen, this is also possible with Git. However, the distributed nature of Git allows other development models, and you may want to first consider whether one of them might be a better fit for your project. For example, you can choose a single person to maintain the project’s primary public repository. Other developers then clone this repository and each work in their own clone. When they have a series of changes that they’re happy with, they ask the maintainer to pull from the branch containing the changes. The maintainer reviews their changes and pulls them into the primary repository, which other developers pull from as necessary to stay coordinated. The Linux kernel and other projects use variants of this model. With a small group, developers may just pull changes from each other’s repositories without the need for a central maintainer. See also -------- [gittutorial[7]](gittutorial), [gittutorial-2[7]](gittutorial-2), [gitcore-tutorial[7]](gitcore-tutorial), [gitglossary[7]](gitglossary), [giteveryday[7]](giteveryday), [The Git User’s Manual](user-manual) git git-send-email git-send-email ============== Name ---- git-send-email - Send a collection of patches as emails Synopsis -------- ``` git send-email [<options>] <file|directory>…​ git send-email [<options>] <format-patch options> git send-email --dump-aliases ``` Description ----------- Takes the patches given on the command line and emails them out. Patches can be specified as files, directories (which will send all files in the directory), or directly as a revision list. 
In the last case, any format accepted by [git-format-patch[1]](git-format-patch) can be passed to git send-email, as well as options understood by [git-format-patch[1]](git-format-patch). The header of the email is configurable via command-line options. If not specified on the command line, the user will be prompted with a ReadLine enabled interface to provide the necessary information. There are two formats accepted for patch files: 1. mbox format files This is what [git-format-patch[1]](git-format-patch) generates. Most headers and MIME formatting are ignored. 2. The original format used by Greg Kroah-Hartman’s `send_lots_of_email.pl` script This format expects the first line of the file to contain the "Cc:" value and the "Subject:" of the message as the second line. Options ------- ### Composing --annotate Review and edit each patch you’re about to send. Default is the value of `sendemail.annotate`. See the CONFIGURATION section for `sendemail.multiEdit`. --bcc=<address>,…​ Specify a "Bcc:" value for each email. Default is the value of `sendemail.bcc`. This option may be specified multiple times. --cc=<address>,…​ Specify a starting "Cc:" value for each email. Default is the value of `sendemail.cc`. This option may be specified multiple times. --compose Invoke a text editor (see GIT\_EDITOR in [git-var[1]](git-var)) to edit an introductory message for the patch series. When `--compose` is used, git send-email will use the From, Subject, and In-Reply-To headers specified in the message. If the body of the message (what you type after the headers and a blank line) only contains blank (or Git: prefixed) lines, the summary won’t be sent, but From, Subject, and In-Reply-To headers will be used unless they are removed. Missing From or In-Reply-To headers will be prompted for. See the CONFIGURATION section for `sendemail.multiEdit`. --from=<address> Specify the sender of the emails. 
If not specified on the command line, the value of the `sendemail.from` configuration option is used. If neither the command-line option nor `sendemail.from` are set, then the user will be prompted for the value. The default for the prompt will be the value of GIT\_AUTHOR\_IDENT, or GIT\_COMMITTER\_IDENT if that is not set, as returned by "git var -l". --reply-to=<address> Specify the address where replies from recipients should go to. Use this if replies to messages should go to another address than what is specified with the --from parameter. --in-reply-to=<identifier> Make the first mail (or all the mails with `--no-thread`) appear as a reply to the given Message-Id, which avoids breaking threads to provide a new patch series. The second and subsequent emails will be sent as replies according to the `--[no-]chain-reply-to` setting. So for example when `--thread` and `--no-chain-reply-to` are specified, the second and subsequent patches will be replies to the first one like in the illustration below where `[PATCH v2 0/3]` is in reply to `[PATCH 0/2]`: ``` [PATCH 0/2] Here is what I did... [PATCH 1/2] Clean up and tests [PATCH 2/2] Implementation [PATCH v2 0/3] Here is a reroll [PATCH v2 1/3] Clean up [PATCH v2 2/3] New tests [PATCH v2 3/3] Implementation ``` Only necessary if --compose is also set. If --compose is not set, this will be prompted for. --subject=<string> Specify the initial subject of the email thread. Only necessary if --compose is also set. If --compose is not set, this will be prompted for. --to=<address>,…​ Specify the primary recipient of the emails generated. Generally, this will be the upstream maintainer of the project involved. Default is the value of the `sendemail.to` configuration value; if that is unspecified, and --to-cmd is not specified, this will be prompted for. This option may be specified multiple times. 
--8bit-encoding=<encoding> When encountering a non-ASCII message or subject that does not declare its encoding, add headers/quoting to indicate it is encoded in <encoding>. Default is the value of the `sendemail.assume8bitEncoding`; if that is unspecified, this will be prompted for if any non-ASCII files are encountered. Note that no attempts whatsoever are made to validate the encoding. --compose-encoding=<encoding> Specify encoding of compose message. Default is the value of the `sendemail.composeencoding`; if that is unspecified, UTF-8 is assumed. --transfer-encoding=(7bit|8bit|quoted-printable|base64|auto) Specify the transfer encoding to be used to send the message over SMTP. 7bit will fail upon encountering a non-ASCII message. quoted-printable can be useful when the repository contains files that contain carriage returns, but makes the raw patch email file (as saved from a MUA) much harder to inspect manually. base64 is even more fool proof, but also even more opaque. auto will use 8bit when possible, and quoted-printable otherwise. Default is the value of the `sendemail.transferEncoding` configuration value; if that is unspecified, default to `auto`. --xmailer --no-xmailer Add (or prevent adding) the "X-Mailer:" header. By default, the header is added, but it can be turned off by setting the `sendemail.xmailer` configuration variable to `false`. ### Sending --envelope-sender=<address> Specify the envelope sender used to send the emails. This is useful if your default address is not the address that is subscribed to a list. In order to use the `From` address, set the value to "auto". If you use the sendmail binary, you must have suitable privileges for the -f parameter. Default is the value of the `sendemail.envelopeSender` configuration variable; if that is unspecified, choosing the envelope sender is left to your MTA. --sendmail-cmd=<command> Specify a command to run to send the email. 
The command should be sendmail-like; specifically, it must support the `-i` option. The command will be executed in the shell if necessary. Default is the value of `sendemail.sendmailcmd`. If unspecified, and if --smtp-server is also unspecified, git-send-email will search for `sendmail` in `/usr/sbin`, `/usr/lib` and $PATH. --smtp-encryption=<encryption> Specify in what way encrypting begins for the SMTP connection. Valid values are `ssl` and `tls`. Any other value reverts to plain (unencrypted) SMTP, which defaults to port 25. Despite the names, both values will use the same newer version of TLS, but for historic reasons have these names. `ssl` refers to "implicit" encryption (sometimes called SMTPS), that uses port 465 by default. `tls` refers to "explicit" encryption (often known as STARTTLS), that uses port 25 by default. Other ports might be used by the SMTP server, which are not the default. Commonly found alternative port for `tls` and unencrypted is 587. You need to check your provider’s documentation or your server configuration to make sure for your own case. Default is the value of `sendemail.smtpEncryption`. --smtp-domain=<FQDN> Specifies the Fully Qualified Domain Name (FQDN) used in the HELO/EHLO command to the SMTP server. Some servers require the FQDN to match your IP address. If not set, git send-email attempts to determine your FQDN automatically. Default is the value of `sendemail.smtpDomain`. --smtp-auth=<mechanisms> Whitespace-separated list of allowed SMTP-AUTH mechanisms. This setting forces using only the listed mechanisms. Example: ``` $ git send-email --smtp-auth="PLAIN LOGIN GSSAPI" ... ``` If at least one of the specified mechanisms matches the ones advertised by the SMTP server and if it is supported by the utilized SASL library, the mechanism is used for authentication. If neither `sendemail.smtpAuth` nor `--smtp-auth` is specified, all mechanisms supported by the SASL library can be used. 
The special value `none` may be specified to completely disable authentication independently of `--smtp-user`.

--smtp-pass[=<password>]
Password for SMTP-AUTH. The argument is optional: If no argument is specified, then the empty string is used as the password. Default is the value of `sendemail.smtpPass`, however `--smtp-pass` always overrides this value. Furthermore, passwords need not be specified in configuration files or on the command line. If a username has been specified (with `--smtp-user` or a `sendemail.smtpUser`), but no password has been specified (with `--smtp-pass` or `sendemail.smtpPass`), then a password is obtained using `git-credential`.

--no-smtp-auth
Disable SMTP authentication. Shorthand for `--smtp-auth=none`.

--smtp-server=<host>
If set, specifies the outgoing SMTP server to use (e.g. `smtp.example.com` or a raw IP address). If unspecified, and if `--sendmail-cmd` is also unspecified, the default is to search for `sendmail` in `/usr/sbin`, `/usr/lib` and $PATH if such a program is available, falling back to `localhost` otherwise. For backward compatibility, this option can also specify a full pathname of a sendmail-like program instead; the program must support the `-i` option. This method does not support passing arguments or using plain command names. For those use cases, consider using `--sendmail-cmd` instead.

--smtp-server-port=<port>
Specifies a port different from the default port (SMTP servers typically listen to smtp port 25, but may also listen to submission port 587, or the common SSL smtp port 465); symbolic port names (e.g. "submission" instead of 587) are also accepted. The port can also be set with the `sendemail.smtpServerPort` configuration variable.

--smtp-server-option=<option>
If set, specifies the outgoing SMTP server option to use. Default value can be specified by the `sendemail.smtpServerOption` configuration option. The --smtp-server-option option must be repeated for each option you want to pass to the server.
Likewise, different lines in the configuration files must be used for each option.

--smtp-ssl
Legacy alias for `--smtp-encryption ssl`.

--smtp-ssl-cert-path
Path to a store of trusted CA certificates for SMTP SSL/TLS certificate validation (either a directory that has been processed by `c_rehash`, or a single file containing one or more PEM format certificates concatenated together: see verify(1) -CAfile and -CApath for more information on these). Set it to an empty string to disable certificate verification. Defaults to the value of the `sendemail.smtpsslcertpath` configuration variable, if set, or the backing SSL library’s compiled-in default otherwise (which should be the best choice on most platforms).

--smtp-user=<user>
Username for SMTP-AUTH. Default is the value of `sendemail.smtpUser`; if a username is not specified (with `--smtp-user` or `sendemail.smtpUser`), then authentication is not attempted.

--smtp-debug=0|1
Enable (1) or disable (0) debug output. If enabled, SMTP commands and replies will be printed. Useful to debug TLS connection and authentication problems.

--batch-size=<num>
Some email servers (e.g. smtp.163.com) limit the number of emails to be sent per session (connection), which leads to a failure when sending many messages. With this option, send-email will disconnect after sending $<num> messages and wait for a few seconds (see --relogin-delay) before reconnecting, to work around such a limit. You may want to use some form of credential helper to avoid having to retype your password every time this happens. Defaults to the `sendemail.smtpBatchSize` configuration variable.

--relogin-delay=<int>
Wait $<int> seconds before reconnecting to the SMTP server. Used together with the --batch-size option. Defaults to the `sendemail.smtpReloginDelay` configuration variable.

### Automating

--no-[to|cc|bcc]
Clears any list of "To:", "Cc:", "Bcc:" addresses previously set via config.
--no-identity Clears the previously read value of `sendemail.identity` set via config, if any. --to-cmd=<command> Specify a command to execute once per patch file which should generate patch file specific "To:" entries. Output of this command must be single email address per line. Default is the value of `sendemail.tocmd` configuration value. --cc-cmd=<command> Specify a command to execute once per patch file which should generate patch file specific "Cc:" entries. Output of this command must be single email address per line. Default is the value of `sendemail.ccCmd` configuration value. --[no-]chain-reply-to If this is set, each email will be sent as a reply to the previous email sent. If disabled with "--no-chain-reply-to", all emails after the first will be sent as replies to the first email sent. When using this, it is recommended that the first file given be an overview of the entire patch series. Disabled by default, but the `sendemail.chainReplyTo` configuration variable can be used to enable it. --identity=<identity> A configuration identity. When given, causes values in the `sendemail.<identity>` subsection to take precedence over values in the `sendemail` section. The default identity is the value of `sendemail.identity`. --[no-]signed-off-by-cc If this is set, add emails found in the `Signed-off-by` trailer or Cc: lines to the cc list. Default is the value of `sendemail.signedoffbycc` configuration value; if that is unspecified, default to --signed-off-by-cc. --[no-]cc-cover If this is set, emails found in Cc: headers in the first patch of the series (typically the cover letter) are added to the cc list for each email set. Default is the value of `sendemail.cccover` configuration value; if that is unspecified, default to --no-cc-cover. --[no-]to-cover If this is set, emails found in To: headers in the first patch of the series (typically the cover letter) are added to the to list for each email set. 
Default is the value of `sendemail.tocover` configuration value; if that is unspecified, default to --no-to-cover. --suppress-cc=<category> Specify an additional category of recipients to suppress the auto-cc of: * `author` will avoid including the patch author. * `self` will avoid including the sender. * `cc` will avoid including anyone mentioned in Cc lines in the patch header except for self (use `self` for that). * `bodycc` will avoid including anyone mentioned in Cc lines in the patch body (commit message) except for self (use `self` for that). * `sob` will avoid including anyone mentioned in the Signed-off-by trailers except for self (use `self` for that). * `misc-by` will avoid including anyone mentioned in Acked-by, Reviewed-by, Tested-by and other "-by" lines in the patch body, except Signed-off-by (use `sob` for that). * `cccmd` will avoid running the --cc-cmd. * `body` is equivalent to `sob` + `bodycc` + `misc-by`. * `all` will suppress all auto cc values. Default is the value of `sendemail.suppresscc` configuration value; if that is unspecified, default to `self` if --suppress-from is specified, as well as `body` if --no-signed-off-cc is specified. --[no-]suppress-from If this is set, do not add the From: address to the cc: list. Default is the value of `sendemail.suppressFrom` configuration value; if that is unspecified, default to --no-suppress-from. --[no-]thread If this is set, the In-Reply-To and References headers will be added to each email sent. Whether each mail refers to the previous email (`deep` threading per `git format-patch` wording) or to the first email (`shallow` threading) is governed by "--[no-]chain-reply-to". If disabled with "--no-thread", those headers will not be added (unless specified with --in-reply-to). Default is the value of the `sendemail.thread` configuration value; if that is unspecified, default to --thread. 
It is up to the user to ensure that no In-Reply-To header already exists when `git send-email` is asked to add it (especially note that `git format-patch` can be configured to do the threading itself). Failure to do so may not produce the expected result in the recipient’s MUA. ### Administering --confirm=<mode> Confirm just before sending: * `always` will always confirm before sending * `never` will never confirm before sending * `cc` will confirm before sending when send-email has automatically added addresses from the patch to the Cc list * `compose` will confirm before sending the first message when using --compose. * `auto` is equivalent to `cc` + `compose` Default is the value of `sendemail.confirm` configuration value; if that is unspecified, default to `auto` unless any of the suppress options have been specified, in which case default to `compose`. --dry-run Do everything except actually send the emails. --[no-]format-patch When an argument may be understood either as a reference or as a file name, choose to understand it as a format-patch argument (`--format-patch`) or as a file name (`--no-format-patch`). By default, when such a conflict occurs, git send-email will fail. --quiet Make git-send-email less verbose. One line per email should be all that is output. --[no-]validate Perform sanity checks on patches. Currently, validation means the following: * Invoke the sendemail-validate hook if present (see [githooks[5]](githooks)). * Warn of patches that contain lines longer than 998 characters unless a suitable transfer encoding (`auto`, `base64`, or `quoted-printable`) is used; this is due to SMTP limits as described by <http://www.ietf.org/rfc/rfc5322.txt>. Default is the value of `sendemail.validate`; if this is not set, default to `--validate`. --force Send emails even if safety checks would prevent it. 
### Information --dump-aliases Instead of the normal operation, dump the shorthand alias names from the configured alias file(s), one per line in alphabetical order. Note, this only includes the alias name and not its expanded email addresses. See `sendemail.aliasesfile` for more information about aliases. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: sendemail.identity A configuration identity. When given, causes values in the `sendemail.<identity>` subsection to take precedence over values in the `sendemail` section. The default identity is the value of `sendemail.identity`. sendemail.smtpEncryption See [git-send-email[1]](git-send-email) for description. Note that this setting is not subject to the `identity` mechanism. sendemail.smtpsslcertpath Path to ca-certificates (either a directory or a single file). Set it to an empty string to disable certificate verification. sendemail.<identity>.\* Identity-specific versions of the `sendemail.*` parameters found below, taking precedence over those when this identity is selected, through either the command-line or `sendemail.identity`. sendemail.multiEdit If true (default), a single editor instance will be spawned to edit files you have to edit (patches when `--annotate` is used, and the summary when `--compose` is used). If false, files will be edited one after the other, spawning a new editor each time. sendemail.confirm Sets the default for whether to confirm before sending. Must be one of `always`, `never`, `cc`, `compose`, or `auto`. See `--confirm` in the [git-send-email[1]](git-send-email) documentation for the meaning of these values. sendemail.aliasesFile To avoid typing long email addresses, point this to one or more email aliases files. You must also supply `sendemail.aliasFileType`. 
sendemail.aliasFileType Format of the file(s) specified in sendemail.aliasesFile. Must be one of `mutt`, `mailrc`, `pine`, `elm`, or `gnus`, or `sendmail`. What an alias file in each format looks like can be found in the documentation of the email program of the same name. The differences and limitations from the standard formats are described below: sendmail * Quoted aliases and quoted addresses are not supported: lines that contain a `"` symbol are ignored. * Redirection to a file (`/path/name`) or pipe (`|command`) is not supported. * File inclusion (`:include: /path/name`) is not supported. * Warnings are printed on the standard error output for any explicitly unsupported constructs, and any other lines that are not recognized by the parser. sendemail.annotate sendemail.bcc sendemail.cc sendemail.ccCmd sendemail.chainReplyTo sendemail.envelopeSender sendemail.from sendemail.signedoffbycc sendemail.smtpPass sendemail.suppresscc sendemail.suppressFrom sendemail.to sendemail.tocmd sendemail.smtpDomain sendemail.smtpServer sendemail.smtpServerPort sendemail.smtpServerOption sendemail.smtpUser sendemail.thread sendemail.transferEncoding sendemail.validate sendemail.xmailer These configuration variables all provide a default for [git-send-email[1]](git-send-email) command-line options. See its documentation for details. sendemail.signedoffcc (deprecated) Deprecated alias for `sendemail.signedoffbycc`. sendemail.smtpBatchSize Number of messages to be sent per connection, after that a relogin will happen. If the value is 0 or undefined, send all messages in one connection. See also the `--batch-size` option of [git-send-email[1]](git-send-email). sendemail.smtpReloginDelay Seconds wait before reconnecting to smtp server. See also the `--relogin-delay` option of [git-send-email[1]](git-send-email). 
sendemail.forbidSendmailVariables To avoid common misconfiguration mistakes, [git-send-email[1]](git-send-email) will abort with a warning if any configuration options for "sendmail" exist. Set this variable to bypass the check. Examples -------- ### Use gmail as the smtp server To use `git send-email` to send your patches through the GMail SMTP server, edit ~/.gitconfig to specify your account settings: ``` [sendemail] smtpEncryption = tls smtpServer = smtp.gmail.com smtpUser = [email protected] smtpServerPort = 587 ``` If you have multi-factor authentication set up on your Gmail account, you will need to generate an app-specific password for use with `git send-email`. Visit <https://security.google.com/settings/security/apppasswords> to create it. If you do not have multi-factor authentication set up on your Gmail account, you will need to allow less secure app access. Visit <https://myaccount.google.com/lesssecureapps> to enable it. Once your commits are ready to be sent to the mailing list, run the following commands: ``` $ git format-patch --cover-letter -M origin/master -o outgoing/ $ edit outgoing/0000-* $ git send-email outgoing/* ``` The first time you run it, you will be prompted for your credentials. Enter the app-specific or your regular password as appropriate. If you have credential helper configured (see [git-credential[1]](git-credential)), the password will be saved in the credential store so you won’t have to type it the next time. Note: the following core Perl modules that may be installed with your distribution of Perl are required: MIME::Base64, MIME::QuotedPrint, Net::Domain and Net::SMTP. These additional Perl modules are also required: Authen::SASL and Mail::Address. See also -------- [git-format-patch[1]](git-format-patch), [git-imap-send[1]](git-imap-send), mbox(5)
programming_docs
git git-cvsserver git-cvsserver ============= Name ---- git-cvsserver - A CVS server emulator for Git Synopsis -------- SSH: ``` export CVS_SERVER="git cvsserver" cvs -d :ext:user@server/path/repo.git co <HEAD_name> ``` pserver (/etc/inetd.conf): ``` cvspserver stream tcp nowait nobody /usr/bin/git-cvsserver git-cvsserver pserver ``` Usage: ``` git-cvsserver [<options>] [pserver|server] [<directory> …​] ``` Description ----------- This application is a CVS emulation layer for Git. It is highly functional. However, not all methods are implemented, and for those methods that are implemented, not all switches are implemented. Testing has been done using both the CLI CVS client, and the Eclipse CVS plugin. Most functionality works fine with both of these clients. Options ------- All these options obviously only make sense if enforced by the server side. They have been implemented to resemble the [git-daemon[1]](git-daemon) options as closely as possible. --base-path <path> Prepend `path` to requested CVSROOT --strict-paths Don’t allow recursing into subdirectories --export-all Don’t check for `gitcvs.enabled` in config. You also have to specify a list of allowed directories (see below) if you want to use this option. -V --version Print version information and exit -h -H --help Print usage information and exit <directory> The remaining arguments provide a list of directories. If no directories are given, then all are allowed. Repositories within these directories still require the `gitcvs.enabled` config option, unless `--export-all` is specified. Limitations ----------- CVS clients cannot tag, branch or perform Git merges. `git-cvsserver` maps Git branches to CVS modules. This is very different from what most CVS users would expect since in CVS modules usually represent one or more directories. Installation ------------ 1. 
If you are going to offer CVS access via pserver, add a line in /etc/inetd.conf like ``` cvspserver stream tcp nowait nobody git-cvsserver pserver ``` Note: Some inetd servers let you specify the name of the executable independently of the value of argv[0] (i.e. the name the program assumes it was executed with). In this case the correct line in /etc/inetd.conf looks like ``` cvspserver stream tcp nowait nobody /usr/bin/git-cvsserver git-cvsserver pserver ``` Only anonymous access is provided by pserver by default. To commit you will have to create pserver accounts, simply add a gitcvs.authdb setting in the config file of the repositories you want the cvsserver to allow writes to, for example: ``` [gitcvs] authdb = /etc/cvsserver/passwd ``` The format of these files is username followed by the encrypted password, for example: ``` myuser:sqkNi8zPf01HI myuser:$1$9K7FzU28$VfF6EoPYCJEYcVQwATgOP/ myuser:$5$.NqmNH1vwfzGpV8B$znZIcumu1tNLATgV2l6e1/mY8RzhUDHMOaVOeL1cxV3 ``` You can use the `htpasswd` facility that comes with Apache to make these files, but only with the -d option (or -B if your system suports it). Preferably use the system specific utility that manages password hash creation in your platform (e.g. mkpasswd in Linux, encrypt in OpenBSD or pwhash in NetBSD) and paste it in the right location. Then provide your password via the pserver method, for example: ``` cvs -d:pserver:someuser:somepassword@server:/path/repo.git co <HEAD_name> ``` No special setup is needed for SSH access, other than having Git tools in the PATH. If you have clients that do not accept the CVS\_SERVER environment variable, you can rename `git-cvsserver` to `cvs`. 
Note: Newer CVS versions (>= 1.12.11) also support specifying CVS\_SERVER directly in CVSROOT like ``` cvs -d ":ext;CVS_SERVER=git cvsserver:user@server/path/repo.git" co <HEAD_name> ``` This has the advantage that it will be saved in your `CVS/Root` files and you don’t need to worry about always setting the correct environment variable. SSH users restricted to `git-shell` don’t need to override the default with CVS\_SERVER (and shouldn’t) as `git-shell` understands `cvs` to mean `git-cvsserver` and pretends that the other end runs the real `cvs` better. 2. For each repo that you want accessible from CVS you need to edit config in the repo and add the following section. ``` [gitcvs] enabled=1 # optional for debugging logFile=/path/to/logfile ``` Note: you need to ensure each user that is going to invoke `git-cvsserver` has write access to the log file and to the database (see [Database Backend](#dbbackend). If you want to offer write access over SSH, the users of course also need write access to the Git repository itself. You also need to ensure that each repository is "bare" (without a Git index file) for `cvs commit` to work. See [gitcvs-migration[7]](gitcvs-migration). All configuration variables can also be overridden for a specific method of access. Valid method names are "ext" (for SSH access) and "pserver". The following example configuration would disable pserver access while still allowing access over SSH. ``` [gitcvs] enabled=0 [gitcvs "ext"] enabled=1 ``` 3. If you didn’t specify the CVSROOT/CVS\_SERVER directly in the checkout command, automatically saving it in your `CVS/Root` files, then you need to set them explicitly in your environment. CVSROOT should be set as per normal, but the directory should point at the appropriate Git repo. As above, for SSH clients `not` restricted to `git-shell`, CVS\_SERVER should be set to `git-cvsserver`. ``` export CVSROOT=:ext:user@server:/var/git/project.git export CVS_SERVER="git cvsserver" ``` 4. 
For SSH clients that will make commits, make sure their server-side .ssh/environment files (or .bashrc, etc., according to their specific shell) export appropriate values for GIT\_AUTHOR\_NAME, GIT\_AUTHOR\_EMAIL, GIT\_COMMITTER\_NAME, and GIT\_COMMITTER\_EMAIL. For SSH clients whose login shell is bash, .bashrc may be a reasonable alternative. 5. Clients should now be able to check out the project. Use the CVS `module` name to indicate what Git `head` you want to check out. This also sets the name of your newly checked-out directory, unless you tell it otherwise with `-d <dir_name>`. For example, this checks out `master` branch to the `project-master` directory: ``` cvs co -d project-master master ``` Database backend ---------------- `git-cvsserver` uses one database per Git head (i.e. CVS module) to store information about the repository to maintain consistent CVS revision numbers. The database needs to be updated (i.e. written to) after every commit. If the commit is done directly by using `git` (as opposed to using `git-cvsserver`) the update will need to happen on the next repository access by `git-cvsserver`, independent of access method and requested operation. That means that even if you offer only read access (e.g. by using the pserver method), `git-cvsserver` should have write access to the database to work reliably (otherwise you need to make sure that the database is up to date any time `git-cvsserver` is executed). By default it uses SQLite databases in the Git directory, named `gitcvs.<module_name>.sqlite`. Note that the SQLite backend creates temporary files in the same directory as the database file on write so it might not be enough to grant the users using `git-cvsserver` write access to the database file without granting them write access to the directory, too. The database cannot be reliably regenerated in a consistent form after the branch it is tracking has changed. 
Example: For merged branches, `git-cvsserver` only tracks one branch of development, and after a `git merge` an incrementally updated database may track a different branch than a database regenerated from scratch, causing inconsistent CVS revision numbers. `git-cvsserver` has no way of knowing which branch it would have picked if it had been run incrementally pre-merge. So if you have to fully or partially (from old backup) regenerate the database, you should be suspicious of pre-existing CVS sandboxes. You can configure the database backend with the following configuration variables: ### Configuring database backend `git-cvsserver` uses the Perl DBI module. Please also read its documentation if changing these variables, especially about `DBI->connect()`. gitcvs.dbName Database name. The exact meaning depends on the selected database driver, for SQLite this is a filename. Supports variable substitution (see below). May not contain semicolons (`;`). Default: `%Ggitcvs.%m.sqlite` gitcvs.dbDriver Used DBI driver. You can specify any available driver for this here, but it might not work. cvsserver is tested with `DBD::SQLite`, reported to work with `DBD::Pg`, and reported **not** to work with `DBD::mysql`. Please regard this as an experimental feature. May not contain colons (`:`). Default: `SQLite` gitcvs.dbuser Database user. Only useful if setting `dbDriver`, since SQLite has no concept of database users. Supports variable substitution (see below). gitcvs.dbPass Database password. Only useful if setting `dbDriver`, since SQLite has no concept of database passwords. gitcvs.dbTableNamePrefix Database table name prefix. Supports variable substitution (see below). Any non-alphabetic characters will be replaced with underscores. All variables can also be set per access method, see [above](#configaccessmethod). 
#### Variable substitution In `dbDriver` and `dbUser` you can use the following variables: %G Git directory name %g Git directory name, where all characters except for alphanumeric ones, `.`, and `-` are replaced with `_` (this should make it easier to use the directory name in a filename if wanted) %m CVS module/Git head name %a access method (one of "ext" or "pserver") %u Name of the user running `git-cvsserver`. If no name can be determined, the numeric uid is used. Environment ----------- These variables obviate the need for command-line options in some circumstances, allowing easier restricted usage through git-shell. GIT\_CVSSERVER\_BASE\_PATH This variable replaces the argument to --base-path. GIT\_CVSSERVER\_ROOT This variable specifies a single directory, replacing the `<directory>...` argument list. The repository still requires the `gitcvs.enabled` config option, unless `--export-all` is specified. When these environment variables are set, the corresponding command-line arguments may not be used. Eclipse cvs client notes ------------------------ To get a checkout with the Eclipse CVS client: 1. Select "Create a new project → From CVS checkout" 2. Create a new location. See the notes below for details on how to choose the right protocol. 3. Browse the `modules` available. It will give you a list of the heads in the repository. You will not be able to browse the tree from there. Only the heads. 4. Pick `HEAD` when it asks what branch/tag to check out. Untick the "launch commit wizard" to avoid committing the .project file. Protocol notes: If you are using anonymous access via pserver, just select that. Those using SSH access should choose the `ext` protocol, and configure `ext` access on the Preferences→Team→CVS→ExtConnection pane. Set CVS\_SERVER to "`git cvsserver`". Note that password support is not good when using `ext`, you will definitely want to have SSH keys setup. Alternatively, you can just use the non-standard extssh protocol that Eclipse offer. 
In that case CVS\_SERVER is ignored, and you will have to replace the cvs utility on the server with `git-cvsserver` or manipulate your `.bashrc` so that calling `cvs` effectively calls `git-cvsserver`. Clients known to work --------------------- * CVS 1.12.9 on Debian * CVS 1.11.17 on MacOSX (from Fink package) * Eclipse 3.0, 3.1.2 on MacOSX (see Eclipse CVS Client Notes) * TortoiseCVS Operations supported -------------------- All the operations required for normal use are supported, including checkout, diff, status, update, log, add, remove, commit. Most CVS command arguments that read CVS tags or revision numbers (typically -r) work, and also support any git refspec (tag, branch, commit ID, etc). However, CVS revision numbers for non-default branches are not well emulated, and cvs log does not show tags or branches at all. (Non-main-branch CVS revision numbers superficially resemble CVS revision numbers, but they actually encode a git commit ID directly, rather than represent the number of revisions since the branch point.) Note that there are two ways to checkout a particular branch. As described elsewhere on this page, the "module" parameter of cvs checkout is interpreted as a branch name, and it becomes the main branch. It remains the main branch for a given sandbox even if you temporarily make another branch sticky with cvs update -r. Alternatively, the -r argument can indicate some other branch to actually checkout, even though the module is still the "main" branch. Tradeoffs (as currently implemented): Each new "module" creates a new database on disk with a history for the given module, and after the database is created, operations against that main branch are fast. Or alternatively, -r doesn’t take any extra disk space, but may be significantly slower for many operations, like cvs update. If you want to refer to a git refspec that has characters that are not allowed by CVS, you have two options. 
First, it may just work to supply the git refspec directly to the appropriate CVS -r argument; some CVS clients don’t seem to do much sanity checking of the argument. Second, if that fails, you can use a special character escape mechanism that only uses characters that are valid in CVS tags. A sequence of 4 or 5 characters of the form (underscore (`"_"`), dash (`"-"`), one or two characters, and dash (`"-"`)) can encode various characters based on the one or two letters: `"s"` for slash (`"/"`), `"p"` for period (`"."`), `"u"` for underscore (`"_"`), or two hexadecimal digits for any byte value at all (typically an ASCII number, or perhaps a part of a UTF-8 encoded character). Legacy monitoring operations are not supported (edit, watch and related). Exports and tagging (tags and branches) are not supported at this stage. ### CRLF Line Ending Conversions By default the server leaves the `-k` mode blank for all files, which causes the CVS client to treat them as a text files, subject to end-of-line conversion on some platforms. You can make the server use the end-of-line conversion attributes to set the `-k` modes for files by setting the `gitcvs.usecrlfattr` config variable. See [gitattributes[5]](gitattributes) for more information about end-of-line conversion. Alternatively, if `gitcvs.usecrlfattr` config is not enabled or the attributes do not allow automatic detection for a filename, then the server uses the `gitcvs.allBinary` config for the default setting. If `gitcvs.allBinary` is set, then file not otherwise specified will default to `-kb` mode. Otherwise the `-k` mode is left blank. But if `gitcvs.allBinary` is set to "guess", then the correct `-k` mode will be guessed based on the contents of the file. For best consistency with `cvs`, it is probably best to override the defaults by setting `gitcvs.usecrlfattr` to true, and `gitcvs.allBinary` to "guess". Dependencies ------------ `git-cvsserver` depends on DBD::SQLite. 
git git-check-ref-format git-check-ref-format ==================== Name ---- git-check-ref-format - Ensures that a reference name is well formed Synopsis -------- ``` git check-ref-format [--normalize] [--[no-]allow-onelevel] [--refspec-pattern] <refname> git check-ref-format --branch <branchname-shorthand> ``` Description ----------- Checks if a given `refname` is acceptable, and exits with a non-zero status if it is not. A reference is used in Git to specify branches and tags. A branch head is stored in the `refs/heads` hierarchy, while a tag is stored in the `refs/tags` hierarchy of the ref namespace (typically in `$GIT_DIR/refs/heads` and `$GIT_DIR/refs/tags` directories or, as entries in file `$GIT_DIR/packed-refs` if refs are packed by `git gc`). Git imposes the following rules on how references are named: 1. They can include slash `/` for hierarchical (directory) grouping, but no slash-separated component can begin with a dot `.` or end with the sequence `.lock`. 2. They must contain at least one `/`. This enforces the presence of a category like `heads/`, `tags/` etc. but the actual names are not restricted. If the `--allow-onelevel` option is used, this rule is waived. 3. They cannot have two consecutive dots `..` anywhere. 4. They cannot have ASCII control characters (i.e. bytes whose values are lower than \040, or \177 `DEL`), space, tilde `~`, caret `^`, or colon `:` anywhere. 5. They cannot have question-mark `?`, asterisk `*`, or open bracket `[` anywhere. See the `--refspec-pattern` option below for an exception to this rule. 6. They cannot begin or end with a slash `/` or contain multiple consecutive slashes (see the `--normalize` option below for an exception to this rule) 7. They cannot end with a dot `.`. 8. They cannot contain a sequence `@{`. 9. They cannot be the single character `@`. 10. They cannot contain a `\`. 
These rules make it easy for shell script based tools to parse reference names, pathname expansion by the shell when a reference name is used unquoted (by mistake), and also avoid ambiguities in certain reference name expressions (see [gitrevisions[7]](gitrevisions)): 1. A double-dot `..` is often used as in `ref1..ref2`, and in some contexts this notation means `^ref1 ref2` (i.e. not in `ref1` and in `ref2`). 2. A tilde `~` and caret `^` are used to introduce the postfix `nth parent` and `peel onion` operation. 3. A colon `:` is used as in `srcref:dstref` to mean "use srcref’s value and store it in dstref" in fetch and push operations. It may also be used to select a specific object such as with 'git cat-file': "git cat-file blob v1.3.3:refs.c". 4. at-open-brace `@{` is used as a notation to access a reflog entry. With the `--branch` option, the command takes a name and checks if it can be used as a valid branch name (e.g. when creating a new branch). But be cautious when using the previous checkout syntax that may refer to a detached HEAD state. The rule `git check-ref-format --branch $name` implements may be stricter than what `git check-ref-format refs/heads/$name` says (e.g. a dash may appear at the beginning of a ref component, but it is explicitly forbidden at the beginning of a branch name). When run with `--branch` option in a repository, the input is first expanded for the “previous checkout syntax” `@{-n}`. For example, `@{-1}` is a way to refer the last thing that was checked out using "git switch" or "git checkout" operation. This option should be used by porcelains to accept this syntax anywhere a branch name is expected, so they can act as if you typed the branch name. As an exception note that, the “previous checkout operation” might result in a commit object name when the N-th last thing checked out was not a branch. 
Options ------- --[no-]allow-onelevel Controls whether one-level refnames are accepted (i.e., refnames that do not contain multiple `/`-separated components). The default is `--no-allow-onelevel`. --refspec-pattern Interpret <refname> as a reference name pattern for a refspec (as used with remote repositories). If this option is enabled, <refname> is allowed to contain a single `*` in the refspec (e.g., `foo/bar*/baz` or `foo/bar*baz/` but not `foo/bar*/baz*`). --normalize Normalize `refname` by removing any leading slash (`/`) characters and collapsing runs of adjacent slashes between name components into a single slash. If the normalized refname is valid then print it to standard output and exit with a status of 0, otherwise exit with a non-zero status. (`--print` is a deprecated way to spell `--normalize`.) Examples -------- * Print the name of the previous thing checked out: ``` $ git check-ref-format --branch @{-1} ``` * Determine the reference name to use for a new branch: ``` $ ref=$(git check-ref-format --normalize "refs/heads/$newbranch")|| { echo "we do not like '$newbranch' as a branch name." >&2 ; exit 1 ; } ```
programming_docs
git git-unpack-file git-unpack-file =============== Name ---- git-unpack-file - Creates a temporary file with a blob’s contents Synopsis -------- ``` git unpack-file <blob> ``` Description ----------- Creates a file holding the contents of the blob specified by sha1. It returns the name of the temporary file in the following format: .merge\_file\_XXXXX Options ------- <blob> Must be a blob id git git-citool git-citool ========== Name ---- git-citool - Graphical alternative to git-commit Synopsis -------- ``` git citool ``` Description ----------- A Tcl/Tk based graphical interface to review modified files, stage them into the index, enter a commit message and record the new commit onto the current branch. This interface is an alternative to the less interactive `git commit` program. `git citool` is actually a standard alias for `git gui citool`. See [git-gui[1]](git-gui) for more details. git git-receive-pack git-receive-pack ================ Name ---- git-receive-pack - Receive what is pushed into the repository Synopsis -------- ``` git receive-pack <git-dir> ``` Description ----------- Invoked by `git send-pack` and updates the repository with the information fed from the remote end. This command is usually not invoked directly by the end user. The UI for the protocol is on the `git send-pack` side, and the program pair is meant to be used to push updates to remote repository. For pull operations, see [git-fetch-pack[1]](git-fetch-pack). The command allows for creation and fast-forwarding of sha1 refs (heads/tags) on the remote end (strictly speaking, it is the local end `git-receive-pack` runs, but to the user who is sitting at the send-pack end, it is updating the remote. Confused?) There are other real-world examples of using update and post-update hooks found in the Documentation/howto directory. `git-receive-pack` honours the receive.denyNonFastForwards config option, which tells it if updates to a ref should be denied if they are not fast-forwards. 
A number of other receive.\* config options are available to tweak its behavior, see [git-config[1]](git-config). Options ------- <git-dir> The repository to sync into. --http-backend-info-refs Used by [git-http-backend[1]](git-http-backend) to serve up `$GIT_URL/info/refs?service=git-receive-pack` requests. See `--http-backend-info-refs` in [git-upload-pack[1]](git-upload-pack). Pre-receive hook ---------------- Before any ref is updated, if $GIT\_DIR/hooks/pre-receive file exists and is executable, it will be invoked once with no parameters. The standard input of the hook will be one line per ref to be updated: ``` sha1-old SP sha1-new SP refname LF ``` The refname value is relative to $GIT\_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 values before each refname are the object names for the refname before and after the update. Refs to be created will have sha1-old equal to 0{40}, while refs to be deleted will have sha1-new equal to 0{40}, otherwise sha1-old and sha1-new should be valid objects in the repository. When accepting a signed push (see [git-push[1]](git-push)), the signed push certificate is stored in a blob and an environment variable `GIT_PUSH_CERT` can be consulted for its object name. See the description of `post-receive` hook for an example. In addition, the certificate is verified using GPG and the result is exported with the following environment variables: `GIT_PUSH_CERT_SIGNER` The name and the e-mail address of the owner of the key that signed the push certificate. `GIT_PUSH_CERT_KEY` The GPG key ID of the key that signed the push certificate. `GIT_PUSH_CERT_STATUS` The status of GPG verification of the push certificate, using the same mnemonic as used in `%G?` format of `git log` family of commands (see [git-log[1]](git-log)). `GIT_PUSH_CERT_NONCE` The nonce string the process asked the signer to include in the push certificate. 
If this does not match the value recorded on the "nonce" header in the push certificate, it may indicate that the certificate is a valid one that is being replayed from a separate "git push" session. `GIT_PUSH_CERT_NONCE_STATUS` `UNSOLICITED` "git push --signed" sent a nonce when we did not ask it to send one. `MISSING` "git push --signed" did not send any nonce header. `BAD` "git push --signed" sent a bogus nonce. `OK` "git push --signed" sent the nonce we asked it to send. `SLOP` "git push --signed" sent a nonce different from what we asked it to send now, but in a previous session. See `GIT_PUSH_CERT_NONCE_SLOP` environment variable. `GIT_PUSH_CERT_NONCE_SLOP` "git push --signed" sent a nonce different from what we asked it to send now, but in a different session whose starting time is different by this many seconds from the current session. Only meaningful when `GIT_PUSH_CERT_NONCE_STATUS` says `SLOP`. Also read about `receive.certNonceSlop` variable in [git-config[1]](git-config). This hook is called before any refname is updated and before any fast-forward checks are performed. If the pre-receive hook exits with a non-zero exit status no updates will be performed, and the update, post-receive and post-update hooks will not be invoked either. This can be useful to quickly bail out if the update is not to be supported. See the notes on the quarantine environment below. Update hook ----------- Before each ref is updated, if $GIT\_DIR/hooks/update file exists and is executable, it is invoked once per ref, with three parameters: ``` $GIT_DIR/hooks/update refname sha1-old sha1-new ``` The refname parameter is relative to $GIT\_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 arguments are the object names for the refname before and after the update. Note that the hook is called before the refname is updated, so either sha1-old is 0{40} (meaning there is no such ref yet), or it should match what is recorded in refname. 
The hook should exit with non-zero status if it wants to disallow updating the named ref. Otherwise it should exit with zero. Successful execution (a zero exit status) of this hook does not ensure the ref will actually be updated, it is only a prerequisite. As such it is not a good idea to send notices (e.g. email) from this hook. Consider using the post-receive hook instead. Post-receive hook ----------------- After all refs were updated (or attempted to be updated), if any ref update was successful, and if $GIT\_DIR/hooks/post-receive file exists and is executable, it will be invoked once with no parameters. The standard input of the hook will be one line for each successfully updated ref: ``` sha1-old SP sha1-new SP refname LF ``` The refname value is relative to $GIT\_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 values before each refname are the object names for the refname before and after the update. Refs that were created will have sha1-old equal to 0{40}, while refs that were deleted will have sha1-new equal to 0{40}, otherwise sha1-old and sha1-new should be valid objects in the repository. The `GIT_PUSH_CERT*` environment variables can be inspected, just as in `pre-receive` hook, after accepting a signed push. Using this hook, it is easy to generate mails describing the updates to the repository. This example script sends one mail message per ref listing the commits pushed to the repository, and logs the push certificates of signed pushes with good signatures to a logger service: ``` #!/bin/sh # mail out commit update information. 
while read oval nval ref do if expr "$oval" : '0*$' >/dev/null then echo "Created a new ref, with the following commits:" git rev-list --pretty "$nval" else echo "New commits:" git rev-list --pretty "$nval" "^$oval" fi | mail -s "Changes to ref $ref" commit-list@mydomain done # log signed push certificate, if any if test -n "${GIT_PUSH_CERT-}" && test ${GIT_PUSH_CERT_STATUS} = G then ( echo expected nonce is ${GIT_PUSH_CERT_NONCE} git cat-file blob ${GIT_PUSH_CERT} ) | mail -s "push certificate from $GIT_PUSH_CERT_SIGNER" push-log@mydomain fi exit 0 ``` The exit code from this hook invocation is ignored; however, a non-zero exit code will generate an error message. Note that it is possible for refname to not have sha1-new when this hook runs. This can easily occur if another user modifies the ref after it was updated by `git-receive-pack`, but before the hook was able to evaluate it. It is recommended that hooks rely on sha1-new rather than the current value of refname. Post-update hook ---------------- After all other processing, if at least one ref was updated, and if $GIT\_DIR/hooks/post-update file exists and is executable, then post-update will be called with the list of refs that have been updated. This can be used to implement any repository-wide cleanup tasks. The exit code from this hook invocation is ignored; the only thing left for `git-receive-pack` to do at that point is to exit itself anyway. This hook can be used, for example, to run `git update-server-info` if the repository is packed and is served via a dumb transport. ``` #!/bin/sh exec git update-server-info ``` Quarantine environment ---------------------- When `receive-pack` takes in objects, they are placed into a temporary "quarantine" directory within the `$GIT_DIR/objects` directory and migrated into the main object store only after the `pre-receive` hook has completed. If the push fails before then, the temporary directory is removed entirely. This has a few user-visible effects and caveats: 1.
Pushes which fail due to problems with the incoming pack, missing objects, or due to the `pre-receive` hook will not leave any on-disk data. This is usually helpful to prevent repeated failed pushes from filling up your disk, but can make debugging more challenging. 2. Any objects created by the `pre-receive` hook will be created in the quarantine directory (and migrated only if it succeeds). 3. The `pre-receive` hook MUST NOT update any refs to point to quarantined objects. Other programs accessing the repository will not be able to see the objects (and if the pre-receive hook fails, those refs would become corrupted). For safety, any ref updates from within `pre-receive` are automatically rejected. See also -------- [git-send-pack[1]](git-send-pack), [gitnamespaces[7]](gitnamespaces) git git-notes git-notes ========= Name ---- git-notes - Add or inspect object notes Synopsis -------- ``` git notes [list [<object>]] git notes add [-f] [--allow-empty] [-F <file> | -m <msg> | (-c | -C) <object>] [<object>] git notes copy [-f] ( --stdin | <from-object> [<to-object>] ) git notes append [--allow-empty] [-F <file> | -m <msg> | (-c | -C) <object>] [<object>] git notes edit [--allow-empty] [<object>] git notes show [<object>] git notes merge [-v | -q] [-s <strategy> ] <notes-ref> git notes merge --commit [-v | -q] git notes merge --abort [-v | -q] git notes remove [--ignore-missing] [--stdin] [<object>…​] git notes prune [-n] [-v] git notes get-ref ``` Description ----------- Adds, removes, or reads notes attached to objects, without touching the objects themselves. By default, notes are saved to and read from `refs/notes/commits`, but this default can be overridden. See the OPTIONS, CONFIGURATION, and ENVIRONMENT sections below. If this ref does not exist, it will be quietly created when it is first needed to store a note. A typical use of notes is to supplement a commit message without changing the commit itself. 
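A minimal scratch session illustrating this use; the repository, the identity, and the note text are made up for the demo, and `git` is assumed to be on `PATH`:

```shell
# Scratch demo: attach a note to a commit without rewriting the commit,
# then view it with git log.  All names here are placeholders.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m 'initial commit'
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
	notes add -m 'Tested-by: Demo User <demo@example.com>'
git -C "$repo" log -1 --notes    # message is followed by a "Notes:" section
```

The commit object itself is untouched; only `refs/notes/commits` advances.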
Notes can be shown by `git log` along with the original commit message. To distinguish these notes from the message stored in the commit object, the notes are indented like the message, after an unindented line saying "Notes (<refname>):" (or "Notes:" for `refs/notes/commits`). Notes can also be added to patches prepared with `git format-patch` by using the `--notes` option. Such notes are added as a patch commentary after a three dash separator line. To change which notes are shown by `git log`, see the "notes.displayRef" discussion in [CONFIGURATION](#CONFIGURATION). See the "notes.rewrite.<command>" configuration for a way to carry notes across commands that rewrite commits. Subcommands ----------- list List the notes object for a given object. If no object is given, show a list of all note objects and the objects they annotate (in the format "<note object> <annotated object>"). This is the default subcommand if no subcommand is given. add Add notes for a given object (defaults to HEAD). Abort if the object already has notes (use `-f` to overwrite existing notes). However, if you’re using `add` interactively (using an editor to supply the notes contents), then - instead of aborting - the existing notes will be opened in the editor (like the `edit` subcommand). copy Copy the notes for the first object onto the second object (defaults to HEAD). Abort if the second object already has notes, or if the first object has none (use -f to overwrite existing notes to the second object). This subcommand is equivalent to: `git notes add [-f] -C $(git notes list <from-object>) <to-object>` In `--stdin` mode, take lines in the format ``` <from-object> SP <to-object> [ SP <rest> ] LF ``` on standard input, and copy the notes from each <from-object> to its corresponding <to-object>. (The optional `<rest>` is ignored so that the command can read the input given to the `post-rewrite` hook.) append Append to the notes of an existing object (defaults to HEAD). 
Creates a new notes object if needed. edit Edit the notes for a given object (defaults to HEAD). show Show the notes for a given object (defaults to HEAD). merge Merge the given notes ref into the current notes ref. This will try to merge the changes made by the given notes ref (called "remote") since the merge-base (if any) into the current notes ref (called "local"). If conflicts arise and a strategy for automatically resolving conflicting notes (see the "NOTES MERGE STRATEGIES" section) is not given, the "manual" resolver is used. This resolver checks out the conflicting notes in a special worktree (`.git/NOTES_MERGE_WORKTREE`), and instructs the user to manually resolve the conflicts there. When done, the user can either finalize the merge with `git notes merge --commit`, or abort the merge with `git notes merge --abort`. remove Remove the notes for given objects (defaults to HEAD). When giving zero or one object from the command line, this is equivalent to specifying an empty note message to the `edit` subcommand. prune Remove all notes for non-existing/unreachable objects. get-ref Print the current notes ref. This provides an easy way to retrieve the current notes ref (e.g. from scripts). Options ------- -f --force When adding notes to an object that already has notes, overwrite the existing notes (instead of aborting). -m <msg> --message=<msg> Use the given note message (instead of prompting). If multiple `-m` options are given, their values are concatenated as separate paragraphs. Lines starting with `#` and empty lines other than a single line between paragraphs will be stripped out. -F <file> --file=<file> Take the note message from the given file. Use `-` to read the note message from the standard input. Lines starting with `#` and empty lines other than a single line between paragraphs will be stripped out. -C <object> --reuse-message=<object> Take the given blob object (for example, another note) as the note message. 
(Use `git notes copy <object>` instead to copy notes between objects.) -c <object> --reedit-message=<object> Like `-C`, but with `-c` the editor is invoked, so that the user can further edit the note message. --allow-empty Allow an empty note object to be stored. The default behavior is to automatically remove empty notes. --ref <ref> Manipulate the notes tree in <ref>. This overrides `GIT_NOTES_REF` and the "core.notesRef" configuration. The ref specifies the full refname when it begins with `refs/notes/`; when it begins with `notes/`, `refs/` is prefixed; otherwise, `refs/notes/` is prefixed to form the full name of the ref. --ignore-missing Do not consider it an error to request removing notes from an object that does not have notes attached to it. --stdin Also read the object names to remove notes from the standard input (there is no reason you cannot combine this with object names from the command line). -n --dry-run Do not remove anything; just report the object names whose notes would be removed. -s <strategy> --strategy=<strategy> When merging notes, resolve notes conflicts using the given strategy. The following strategies are recognized: "manual" (default), "ours", "theirs", "union" and "cat\_sort\_uniq". This option overrides the "notes.mergeStrategy" configuration setting. See the "NOTES MERGE STRATEGIES" section below for more information on each notes merge strategy. --commit Finalize an in-progress `git notes merge`. Use this option when you have resolved the conflicts that `git notes merge` stored in .git/NOTES\_MERGE\_WORKTREE. This amends the partial merge commit created by `git notes merge` (stored in .git/NOTES\_MERGE\_PARTIAL) by adding the notes in .git/NOTES\_MERGE\_WORKTREE. The notes ref stored in the .git/NOTES\_MERGE\_REF symref is updated to the resulting commit. --abort Abort/reset an in-progress `git notes merge`, i.e. a notes merge with conflicts. This simply removes all files related to the notes merge.
-q --quiet When merging notes, operate quietly. -v --verbose When merging notes, be more verbose. When pruning notes, report all object names whose notes are removed. Discussion ---------- Commit notes are blobs containing extra information about an object (usually information to supplement a commit’s message). These blobs are taken from notes refs. A notes ref is usually a branch which contains "files" whose paths are the object names for the objects they describe, with some directory separators included for performance reasons [[1](#_footnotedef_1 "View footnote.")]. Every notes change creates a new commit at the specified notes ref. You can therefore inspect the history of the notes by invoking, e.g., `git log -p notes/commits`. Currently the commit message only records which operation triggered the update, and the commit authorship is determined according to the usual rules (see [git-commit[1]](git-commit)). These details may change in the future. It is also permitted for a notes ref to point directly to a tree object, in which case the history of the notes can be read with `git log -p -g <refname>`. Notes merge strategies ---------------------- The default notes merge strategy is "manual", which checks out conflicting notes in a special work tree for resolving notes conflicts (`.git/NOTES_MERGE_WORKTREE`), and instructs the user to resolve the conflicts in that work tree. When done, the user can either finalize the merge with `git notes merge --commit`, or abort the merge with `git notes merge --abort`. Users may select an automated merge strategy from among the following using either -s/--strategy option or configuring notes.mergeStrategy accordingly: "ours" automatically resolves conflicting notes in favor of the local version (i.e. the current notes ref). "theirs" automatically resolves notes conflicts in favor of the remote version (i.e. the given notes ref being merged into the current notes ref). 
"union" automatically resolves notes conflicts by concatenating the local and remote versions. "cat\_sort\_uniq" is similar to "union", but in addition to concatenating the local and remote versions, this strategy also sorts the resulting lines, and removes duplicate lines from the result. This is equivalent to applying the "cat | sort | uniq" shell pipeline to the local and remote versions. This strategy is useful if the notes follow a line-based format where one wants to avoid duplicated lines in the merge result. Note that if either the local or remote version contain duplicate lines prior to the merge, these will also be removed by this notes merge strategy. Examples -------- You can use notes to add annotations with information that was not available at the time a commit was written. ``` $ git notes add -m 'Tested-by: Johannes Sixt <[email protected]>' 72a144e2 $ git show -s 72a144e [...] Signed-off-by: Junio C Hamano <[email protected]> Notes: Tested-by: Johannes Sixt <[email protected]> ``` In principle, a note is a regular Git blob, and any kind of (non-)format is accepted. You can binary-safely create notes from arbitrary files using `git hash-object`: ``` $ cc *.c $ blob=$(git hash-object -w a.out) $ git notes --ref=built add --allow-empty -C "$blob" HEAD ``` (You cannot simply use `git notes --ref=built add -F a.out HEAD` because that is not binary-safe.) Of course, it doesn’t make much sense to display non-text-format notes with `git log`, so if you use such notes, you’ll probably need to write some special-purpose tools to do something useful with them. Configuration ------------- core.notesRef Notes ref to read and manipulate instead of `refs/notes/commits`. Must be an unabbreviated ref name. This setting can be overridden through the environment and command line. Everything above this line in this section isn’t included from the [git-config[1]](git-config) documentation. 
The content that follows is the same as what’s found there: notes.mergeStrategy Which merge strategy to choose by default when resolving notes conflicts. Must be one of `manual`, `ours`, `theirs`, `union`, or `cat_sort_uniq`. Defaults to `manual`. See "NOTES MERGE STRATEGIES" section of [git-notes[1]](git-notes) for more information on each strategy. This setting can be overridden by passing the `--strategy` option to [git-notes[1]](git-notes). notes.<name>.mergeStrategy Which merge strategy to choose when doing a notes merge into refs/notes/<name>. This overrides the more general "notes.mergeStrategy". See the "NOTES MERGE STRATEGIES" section in [git-notes[1]](git-notes) for more information on the available strategies. notes.displayRef Which ref (or refs, if a glob or specified more than once), in addition to the default set by `core.notesRef` or `GIT_NOTES_REF`, to read notes from when showing commit messages with the `git log` family of commands. This setting can be overridden with the `GIT_NOTES_DISPLAY_REF` environment variable, which must be a colon separated list of refs or globs. A warning will be issued for refs that do not exist, but a glob that does not match any refs is silently ignored. This setting can be disabled by the `--no-notes` option to the `git log` family of commands, or by the `--notes=<ref>` option accepted by those commands. The effective value of "core.notesRef" (possibly overridden by GIT\_NOTES\_REF) is also implicitly added to the list of refs to be displayed. notes.rewrite.<command> When rewriting commits with <command> (currently `amend` or `rebase`), if this variable is `false`, git will not copy notes from the original to the rewritten commit. Defaults to `true`. See also "`notes.rewriteRef`" below. This setting can be overridden with the `GIT_NOTES_REWRITE_REF` environment variable, which must be a colon separated list of refs or globs. 
notes.rewriteMode When copying notes during a rewrite (see the "notes.rewrite.<command>" option), determines what to do if the target commit already has a note. Must be one of `overwrite`, `concatenate`, `cat_sort_uniq`, or `ignore`. Defaults to `concatenate`. This setting can be overridden with the `GIT_NOTES_REWRITE_MODE` environment variable. notes.rewriteRef When copying notes during a rewrite, specifies the (fully qualified) ref whose notes should be copied. May be a glob, in which case notes in all matching refs will be copied. You may also specify this configuration several times. Does not have a default value; you must configure this variable to enable note rewriting. Set it to `refs/notes/commits` to enable rewriting for the default commit notes. Can be overridden with the `GIT_NOTES_REWRITE_REF` environment variable. See `notes.rewrite.<command>` above for a further description of its format. Environment ----------- `GIT_NOTES_REF` Which ref to manipulate notes from, instead of `refs/notes/commits`. This overrides the `core.notesRef` setting. `GIT_NOTES_DISPLAY_REF` Colon-delimited list of refs or globs indicating which refs, in addition to the default from `core.notesRef` or `GIT_NOTES_REF`, to read notes from when showing commit messages. This overrides the `notes.displayRef` setting. A warning will be issued for refs that do not exist, but a glob that does not match any refs is silently ignored. `GIT_NOTES_REWRITE_MODE` When copying notes during a rewrite, what to do if the target commit already has a note. Must be one of `overwrite`, `concatenate`, `cat_sort_uniq`, or `ignore`. This overrides the `notes.rewriteMode` setting. `GIT_NOTES_REWRITE_REF` When rewriting commits, which notes to copy from the original to the rewritten commit. Must be a colon-delimited list of refs or globs. If not set in the environment, the list of notes to copy depends on the `notes.rewrite.<command>` and `notes.rewriteRef` settings. --- [1](#_footnoteref_1).
Permitted pathnames have the form *bf*`/`*fe*`/`*30*`/`*…​*`/`*680d5a…​*: a sequence of directory names of two hexadecimal digits each followed by a filename with the rest of the object ID.
git git-fast-export git-fast-export =============== Name ---- git-fast-export - Git data exporter Synopsis -------- ``` git fast-export [<options>] | git fast-import ``` Description ----------- This program dumps the given revisions in a form suitable to be piped into `git fast-import`. You can use it as a human-readable bundle replacement (see [git-bundle[1]](git-bundle)), or as a format that can be edited before being fed to `git fast-import` in order to do history rewrites (an ability relied on by tools like `git filter-repo`). Options ------- --progress=<n> Insert `progress` statements every <n> objects, to be shown by `git fast-import` during import. --signed-tags=(verbatim|warn|warn-strip|strip|abort) Specify how to handle signed tags. Since any transformation after the export can change the tag names (which can also happen when excluding revisions) the signatures will not match. When asking to `abort` (which is the default), this program will die when encountering a signed tag. With `strip`, the tags will silently be made unsigned, with `warn-strip` they will be made unsigned but a warning will be displayed, with `verbatim`, they will be silently exported and with `warn`, they will be exported, but you will see a warning. --tag-of-filtered-object=(abort|drop|rewrite) Specify how to handle tags whose tagged object is filtered out. Since revisions and files to export can be limited by path, tagged objects may be filtered completely. When asking to `abort` (which is the default), this program will die when encountering such a tag. With `drop` it will omit such tags from the output. With `rewrite`, if the tagged object is a commit, it will rewrite the tag to tag an ancestor commit (via parent rewriting; see [git-rev-list[1]](git-rev-list)) -M -C Perform move and/or copy detection, as described in the [git-diff[1]](git-diff) manual page, and use it to generate rename and copy commands in the output dump. 
Note that earlier versions of this command did not complain and produced incorrect results if you gave these options. --export-marks=<file> Dumps the internal marks table to <file> when complete. Marks are written one per line as `:markid SHA-1`. Only marks for revisions are dumped; marks for blobs are ignored. Backends can use this file to validate imports after they have been completed, or to save the marks table across incremental runs. As <file> is only opened and truncated at completion, the same path can also be safely given to --import-marks. The file will not be written if no new object has been marked/exported. --import-marks=<file> Before processing any input, load the marks specified in <file>. The input file must exist, must be readable, and must use the same format as produced by --export-marks. --mark-tags In addition to labelling blobs and commits with mark ids, also label tags. This is useful in conjunction with `--export-marks` and `--import-marks`, and is also useful (and necessary) for exporting of nested tags. It does not hurt other cases and would be the default, but many fast-import frontends are not prepared to accept tags with mark identifiers. Any commits (or tags) that have already been marked will not be exported again. If the backend uses a similar --import-marks file, this allows for incremental bidirectional exporting of the repository by keeping the marks the same across runs. --fake-missing-tagger Some old repositories have tags without a tagger. The fast-import protocol was pretty strict about that, and did not allow that. So fake a tagger to be able to fast-import the output. --use-done-feature Start the stream with a `feature done` stanza, and terminate it with a `done` command. --no-data Skip output of blob objects and instead refer to blobs via their original SHA-1 hash. This is useful when rewriting the directory structure or history of a repository without touching the contents of individual files. 
Note that the resulting stream can only be used by a repository which already contains the necessary objects. --full-tree This option will cause fast-export to issue a "deleteall" directive for each commit followed by a full list of all files in the commit (as opposed to just listing the files which are different from the commit’s first parent). --anonymize Anonymize the contents of the repository while still retaining the shape of the history and stored tree. See the section on `ANONYMIZING` below. --anonymize-map=<from>[:<to>] Convert token `<from>` to `<to>` in the anonymized output. If `<to>` is omitted, map `<from>` to itself (i.e., do not anonymize it). See the section on `ANONYMIZING` below. --reference-excluded-parents By default, running a command such as `git fast-export master~5..master` will not include the commit master~5 and will make master~4 no longer have master~5 as a parent (though both the old master~4 and new master~4 will have all the same files). Use --reference-excluded-parents to instead have the stream refer to commits in the excluded range of history by their sha1sum. Note that the resulting stream can only be used by a repository which already contains the necessary parent commits. --show-original-ids Add an extra directive to the output for commits and blobs, `original-oid <SHA1SUM>`. While such directives will likely be ignored by importers such as git-fast-import, it may be useful for intermediary filters (e.g. for rewriting commit messages which refer to older commits, or for stripping blobs by id). --reencode=(yes|no|abort) Specify how to handle `encoding` header in commit objects. When asking to `abort` (which is the default), this program will die when encountering such a commit object. With `yes`, the commit message will be re-encoded into UTF-8. With `no`, the original encoding will be preserved. --refspec Apply the specified refspec to each ref exported. Multiple of them can be specified. 
[<git-rev-list-args>…​] A list of arguments, acceptable to `git rev-parse` and `git rev-list`, that specifies the specific objects and references to export. For example, `master~10..master` causes the current master reference to be exported along with all objects added since its 10th ancestor commit and (unless the --reference-excluded-parents option is specified) all files common to master~9 and master~10. Examples -------- ``` $ git fast-export --all | (cd /empty/repository && git fast-import) ``` This will export the whole repository and import it into the existing empty repository. Except for reencoding commits that are not in UTF-8, it would be a one-to-one mirror. ``` $ git fast-export master~5..master | sed "s|refs/heads/master|refs/heads/other|" | git fast-import ``` This makes a new branch called `other` from `master~5..master` (i.e. if `master` has linear history, it will take the last 5 commits). Note that this assumes that none of the blobs and commit messages referenced by that revision range contains the string `refs/heads/master`. Anonymizing ----------- If the `--anonymize` option is given, git will attempt to remove all identifying information from the repository while still retaining enough of the original tree and history patterns to reproduce some bugs. The goal is that a git bug which is found on a private repository will persist in the anonymized repository, and the latter can be shared with git developers to help solve the bug. With this option, git will replace all refnames, paths, blob contents, commit and tag messages, names, and email addresses in the output with anonymized data. Two instances of the same string will be replaced equivalently (e.g., two commits with the same author will have the same anonymized author in the output, but bear no resemblance to the original author string). 
The relationship between commits, branches, and tags is retained, as well as the commit timestamps (but the commit messages and refnames bear no resemblance to the originals). The relative makeup of the tree is retained (e.g., if you have a root tree with 10 files and 3 trees, so will the output), but their names and the contents of the files will be replaced. If you think you have found a git bug, you can start by exporting an anonymized stream of the whole repository: ``` $ git fast-export --anonymize --all >anon-stream ``` Then confirm that the bug persists in a repository created from that stream (many bugs will not, as they really do depend on the exact repository contents): ``` $ git init anon-repo $ cd anon-repo $ git fast-import <../anon-stream $ ... test your bug ... ``` If the anonymized repository shows the bug, it may be worth sharing `anon-stream` along with a regular bug report. Note that the anonymized stream compresses very well, so gzipping it is encouraged. If you want to examine the stream to see that it does not contain any private data, you can peruse it directly before sending. You may also want to try: ``` $ perl -pe 's/\d+/X/g' <anon-stream | sort -u | less ``` which shows all of the unique lines (with numbers converted to "X", to collapse "User 0", "User 1", etc into "User X"). This produces a much smaller output, and it is usually easy to quickly confirm that there is no private data in the stream. Reproducing some bugs may require referencing particular commits or paths, which becomes challenging after refnames and paths have been anonymized. You can ask for a particular token to be left as-is or mapped to a new value. For example, if you have a bug which reproduces with `git rev-list sensitive -- secret.c`, you can run: ``` $ git fast-export --anonymize --all \ --anonymize-map=sensitive:foo \ --anonymize-map=secret.c:bar.c \ >stream ``` After importing the stream, you can then run `git rev-list foo -- bar.c` in the anonymized repository. 
Note that paths and refnames are split into tokens at slash boundaries. The command above would anonymize `subdir/secret.c` as something like `path123/bar.c`; you could then search for `bar.c` in the anonymized repository to determine the final pathname. To make referencing the final pathname simpler, you can map each path component; so if you also anonymize `subdir` to `publicdir`, then the final pathname would be `publicdir/bar.c`. Limitations ----------- Since `git fast-import` cannot tag trees, you will not be able to export the linux.git repository completely, as it contains a tag referencing a tree instead of a commit. See also -------- [git-fast-import[1]](git-fast-import) git git-push git-push ======== Name ---- git-push - Update remote refs along with associated objects Synopsis -------- ``` git push [--all | --mirror | --tags] [--follow-tags] [--atomic] [-n | --dry-run] [--receive-pack=<git-receive-pack>] [--repo=<repository>] [-f | --force] [-d | --delete] [--prune] [-v | --verbose] [-u | --set-upstream] [-o <string> | --push-option=<string>] [--[no-]signed|--signed=(true|false|if-asked)] [--force-with-lease[=<refname>[:<expect>]] [--force-if-includes]] [--no-verify] [<repository> [<refspec>…​]] ``` Description ----------- Updates remote refs using local refs, while sending objects necessary to complete the given refs. You can make interesting things happen to a repository every time you push into it, by setting up `hooks` there. See documentation for [git-receive-pack[1]](git-receive-pack). When the command line does not specify where to push with the `<repository>` argument, `branch.*.remote` configuration for the current branch is consulted to determine where to push. If the configuration is missing, it defaults to `origin`. 
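A sketch of how that destination gets recorded in the first place; the `origin`/`topic` layout is made up, and a local bare repository stands in for the remote:

```shell
# Scratch demo: a local bare repository plays the role of "origin" to show
# how "git push -u" records the destination that later bare pushes consult.
work=$(mktemp -d)
remote=$(mktemp -d)
git init -q --bare "$remote"
git -C "$work" init -q
git -C "$work" checkout -q -b topic
git -C "$work" -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m 'initial commit'
git -C "$work" remote add origin "$remote"
git -C "$work" push -q -u origin topic    # records branch.topic.remote and branch.topic.merge
git -C "$work" config branch.topic.remote # prints: origin
git -C "$work" push -q                    # destination now comes from configuration
```

Without `-u` (or an explicit `branch.topic.remote` setting), an argument-less `git push` would fall back to `origin`.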
When the command line does not specify what to push with `<refspec>...` arguments or `--all`, `--mirror`, `--tags` options, the command finds the default `<refspec>` by consulting `remote.*.push` configuration, and if it is not found, honors `push.default` configuration to decide what to push (See [git-config[1]](git-config) for the meaning of `push.default`). When neither the command-line nor the configuration specify what to push, the default behavior is used, which corresponds to the `simple` value for `push.default`: the current branch is pushed to the corresponding upstream branch, but as a safety measure, the push is aborted if the upstream branch does not have the same name as the local one. Options ------- <repository> The "remote" repository that is destination of a push operation. This parameter can be either a URL (see the section [GIT URLS](#URLS) below) or the name of a remote (see the section [REMOTES](#REMOTES) below). <refspec>…​ Specify what destination ref to update with what source object. The format of a <refspec> parameter is an optional plus `+`, followed by the source object <src>, followed by a colon `:`, followed by the destination ref <dst>. The <src> is often the name of the branch you would want to push, but it can be any arbitrary "SHA-1 expression", such as `master~4` or `HEAD` (see [gitrevisions[7]](gitrevisions)). The <dst> tells which ref on the remote side is updated with this push. Arbitrary expressions cannot be used here, an actual ref must be named. If `git push [<repository>]` without any `<refspec>` argument is set to update some ref at the destination with `<src>` with `remote.<repository>.push` configuration variable, `:<dst>` part can be omitted—​such a push will update a ref that `<src>` normally updates without any `<refspec>` on the command line. Otherwise, missing `:<dst>` means to update the same ref as the `<src>`. If <dst> doesn’t start with `refs/` (e.g. 
`refs/heads/master`) we will try to infer where in `refs/*` on the destination <repository> it belongs based on the type of <src> being pushed and whether <dst> is ambiguous. * If <dst> unambiguously refers to a ref on the <repository> remote, then push to that ref. * If <src> resolves to a ref starting with refs/heads/ or refs/tags/, then prepend that to <dst>. * Other ambiguity resolutions might be added in the future, but for now any other cases will error out with an error indicating what we tried, and depending on the `advice.pushUnqualifiedRefname` configuration (see [git-config[1]](git-config)) suggest what refs/ namespace you may have wanted to push to. The object referenced by <src> is used to update the <dst> reference on the remote side. Whether this is allowed depends on where in `refs/*` the <dst> reference lives as described in detail below, in those sections "update" means any modifications except deletes, which as noted after the next few sections are treated differently. The `refs/heads/*` namespace will only accept commit objects, and updates only if they can be fast-forwarded. The `refs/tags/*` namespace will accept any kind of object (as commits, trees and blobs can be tagged), and any updates to them will be rejected. It’s possible to push any type of object to any namespace outside of `refs/{tags,heads}/*`. In the case of tags and commits, these will be treated as if they were the commits inside `refs/heads/*` for the purposes of whether the update is allowed. I.e. a fast-forward of commits and tags outside `refs/{tags,heads}/*` is allowed, even in cases where what’s being fast-forwarded is not a commit, but a tag object which happens to point to a new commit which is a fast-forward of the commit the last tag (or commit) it’s replacing. Replacing a tag with an entirely different tag is also allowed, if it points to the same commit, as well as pushing a peeled tag, i.e. 
pushing the commit that existing tag object points to, or a new tag object which an existing commit points to. Tree and blob objects outside of `refs/{tags,heads}/*` will be treated the same way as if they were inside `refs/tags/*`: any update of them will be rejected. All of the rules described above about what’s not allowed as an update can be overridden by adding an optional leading `+` to a refspec (or using the `--force` command line option). The only exception to this is that no amount of forcing will make the `refs/heads/*` namespace accept a non-commit object. Hooks and configuration can also override or amend these rules; see e.g. `receive.denyNonFastForwards` in [git-config[1]](git-config) and `pre-receive` and `update` in [githooks[5]](githooks). Pushing an empty <src> allows you to delete the <dst> ref from the remote repository. Deletions are always accepted without a leading `+` in the refspec (or `--force`), except when forbidden by configuration or hooks. See `receive.denyDeletes` in [git-config[1]](git-config) and `pre-receive` and `update` in [githooks[5]](githooks). The special refspec `:` (or `+:` to allow non-fast-forward updates) directs Git to push "matching" branches: for every branch that exists on the local side, the remote side is updated if a branch of the same name already exists on the remote side. `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`. --all Push all branches (i.e. refs under `refs/heads/`); cannot be used with other <refspec>. --prune Remove remote branches that don’t have a local counterpart. For example a remote branch `tmp` will be removed if a local branch with the same name doesn’t exist any more. This also respects refspecs, e.g. `git push --prune remote refs/heads/*:refs/tmp/*` would make sure that remote `refs/tmp/foo` will be removed if `refs/heads/foo` doesn’t exist.
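The refspec forms discussed above can be exercised safely against a throwaway bare repository. In this sketch (all repository paths, branch names, and identities are made up), `<src>:<dst>` pushes under a different name, an empty `<src>` deletes, and a leading `+` forces a non-fast-forward update:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git init -q work; cd work
git symbolic-ref HEAD refs/heads/main
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m one
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m two
git remote add origin ../remote.git

# <src>:<dst>; <src> may be any revision expression, e.g. main~1
git push -q origin main:refs/heads/topic
git push -q origin main~1:refs/heads/older

# Empty <src> deletes <dst> on the remote; no '+' or --force needed.
git push -q origin :refs/heads/older

# Rewrite the tip, then force the non-fast-forward update with '+'.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty -m rewritten
git push -q origin +main:refs/heads/topic
```

Without the leading `+`, the last push would be rejected, since the rewritten tip is not a descendant of the commit `refs/heads/topic` pointed at.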
--mirror Instead of naming each ref to push, specifies that all refs under `refs/` (which includes but is not limited to `refs/heads/`, `refs/remotes/`, and `refs/tags/`) be mirrored to the remote repository. Newly created local refs will be pushed to the remote end, locally updated refs will be force updated on the remote end, and deleted refs will be removed from the remote end. This is the default if the configuration option `remote.<remote>.mirror` is set. -n --dry-run Do everything except actually send the updates. --porcelain Produce machine-readable output. The output status line for each ref will be tab-separated and sent to stdout instead of stderr. The full symbolic names of the refs will be given. -d --delete All listed refs are deleted from the remote repository. This is the same as prefixing all refs with a colon. --tags All refs under `refs/tags` are pushed, in addition to refspecs explicitly listed on the command line. --follow-tags Push all the refs that would be pushed without this option, and also push annotated tags in `refs/tags` that are missing from the remote but are pointing at commit-ish that are reachable from the refs being pushed. This can also be specified with configuration variable `push.followTags`. For more information, see `push.followTags` in [git-config[1]](git-config). --[no-]signed --signed=(true|false|if-asked) GPG-sign the push request to update refs on the receiving side, to allow it to be checked by the hooks and/or be logged. If `false` or `--no-signed`, no signing will be attempted. If `true` or `--signed`, the push will fail if the server does not support signed pushes. If set to `if-asked`, sign if and only if the server supports signed pushes. The push will also fail if the actual call to `gpg --sign` fails. See [git-receive-pack[1]](git-receive-pack) for the details on the receiving end. --[no-]atomic Use an atomic transaction on the remote side if available. 
Either all refs are updated, or on error, no refs are updated. If the server does not support atomic pushes, the push will fail. -o <option> --push-option=<option> Transmit the given string to the server, which passes it to the pre-receive as well as the post-receive hook. The given string must not contain a NUL or LF character. When multiple `--push-option=<option>` are given, they are all sent to the other side in the order listed on the command line. When no `--push-option=<option>` is given from the command line, the values of configuration variable `push.pushOption` are used instead. --receive-pack=<git-receive-pack> --exec=<git-receive-pack> Path to the `git-receive-pack` program on the remote end. Sometimes useful when pushing to a remote repository over ssh, and you do not have the program in a directory on the default $PATH. --[no-]force-with-lease --force-with-lease=<refname> --force-with-lease=<refname>:<expect> Usually, "git push" refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it. This option overrides this restriction if the current value of the remote ref is the expected value. "git push" fails otherwise. Imagine that you have to rebase what you have already published. You will have to bypass the "must fast-forward" rule in order to replace the history you originally published with the rebased history. If somebody else built on top of your original history while you are rebasing, the tip of the branch at the remote may advance with their commit, and blindly pushing with `--force` will lose their work. This option allows you to say that you expect the history you are updating is what you rebased and want to replace. If the remote ref still points at the commit you specified, you can be sure that no other people did anything to the ref. It is like taking a "lease" on the ref without explicitly locking it, and the remote ref is updated only if the "lease" is still valid.
`--force-with-lease` alone, without specifying the details, will protect all remote refs that are going to be updated by requiring their current value to be the same as the remote-tracking branch we have for them. `--force-with-lease=<refname>`, without specifying the expected value, will protect the named ref (alone), if it is going to be updated, by requiring its current value to be the same as the remote-tracking branch we have for it. `--force-with-lease=<refname>:<expect>` will protect the named ref (alone), if it is going to be updated, by requiring its current value to be the same as the specified value `<expect>` (which is allowed to be different from the remote-tracking branch we have for the refname, or we do not even have to have such a remote-tracking branch when this form is used). If `<expect>` is the empty string, then the named ref must not already exist. Note that all forms other than `--force-with-lease=<refname>:<expect>` that specifies the expected current value of the ref explicitly are still experimental and their semantics may change as we gain experience with this feature. "--no-force-with-lease" will cancel all the previous --force-with-lease on the command line. A general note on safety: supplying this option without an expected value, i.e. as `--force-with-lease` or `--force-with-lease=<refname>` interacts very badly with anything that implicitly runs `git fetch` on the remote to be pushed to in the background, e.g. `git fetch origin` on your repository in a cronjob. The protection it offers over `--force` is ensuring that subsequent changes your work wasn’t based on aren’t clobbered, but this is trivially defeated if some background process is updating refs in the background. We don’t have anything except the remote tracking info to go by as a heuristic for refs you’re expected to have seen & are willing to clobber. 
If your editor or some other system is running `git fetch` in the background for you a way to mitigate this is to simply set up another remote: ``` git remote add origin-push $(git config remote.origin.url) git fetch origin-push ``` Now when the background process runs `git fetch origin` the references on `origin-push` won’t be updated, and thus commands like: ``` git push --force-with-lease origin-push ``` Will fail unless you manually run `git fetch origin-push`. This method is of course entirely defeated by something that runs `git fetch --all`, in that case you’d need to either disable it or do something more tedious like: ``` git fetch # update 'master' from remote git tag base master # mark our base point git rebase -i master # rewrite some commits git push --force-with-lease=master:base master:master ``` I.e. create a `base` tag for versions of the upstream code that you’ve seen and are willing to overwrite, then rewrite history, and finally force push changes to `master` if the remote version is still at `base`, regardless of what your local `remotes/origin/master` has been updated to in the background. Alternatively, specifying `--force-if-includes` as an ancillary option along with `--force-with-lease[=<refname>]` (i.e., without saying what exact commit the ref on the remote side must be pointing at, or which refs on the remote side are being protected) at the time of "push" will verify if updates from the remote-tracking refs that may have been implicitly updated in the background are integrated locally before allowing a forced update. -f --force Usually, the command refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it. Also, when `--force-with-lease` option is used, the command refuses to update a remote ref whose current value does not match what is expected. This flag disables these checks, and can cause the remote repository to lose commits; use it with care. 
Note that `--force` applies to all the refs that are pushed, hence using it with `push.default` set to `matching` or with multiple push destinations configured with `remote.*.push` may overwrite refs other than the current branch (including local refs that are strictly behind their remote counterpart). To force a push to only one branch, use a `+` in front of the refspec to push (e.g `git push origin +master` to force a push to the `master` branch). See the `<refspec>...` section above for details. --[no-]force-if-includes Force an update only if the tip of the remote-tracking ref has been integrated locally. This option enables a check that verifies if the tip of the remote-tracking ref is reachable from one of the "reflog" entries of the local branch based in it for a rewrite. The check ensures that any updates from the remote have been incorporated locally by rejecting the forced update if that is not the case. If the option is passed without specifying `--force-with-lease`, or specified along with `--force-with-lease=<refname>:<expect>`, it is a "no-op". Specifying `--no-force-if-includes` disables this behavior. --repo=<repository> This option is equivalent to the <repository> argument. If both are specified, the command-line argument takes precedence. -u --set-upstream For every branch that is up to date or successfully pushed, add upstream (tracking) reference, used by argument-less [git-pull[1]](git-pull) and other commands. For more information, see `branch.<name>.merge` in [git-config[1]](git-config). --[no-]thin These options are passed to [git-send-pack[1]](git-send-pack). A thin transfer significantly reduces the amount of sent data when the sender and receiver share many of the same objects in common. The default is `--thin`. -q --quiet Suppress all output, including the listing of updated refs, unless an error occurs. Progress is not reported to the standard error stream. -v --verbose Run verbosely. 
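The lease semantics of `--force-with-lease` (described above) can be observed with two clones of one repository. In this sketch every name (`alice`, `bob`, the paths) is invented: Bob rewrites history while Alice has moved the remote ref, so Bob's stale lease is rejected until he fetches:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git -C remote.git symbolic-ref HEAD refs/heads/main
git init -q alice; cd alice
git symbolic-ref HEAD refs/heads/main
echo base > f.txt; git add f.txt
git -c user.name=alice -c user.email=alice@example.com commit -q -m base
git remote add origin ../remote.git
git push -q -u origin main
cd "$tmp"
git clone -q "$tmp/remote.git" bob
cd alice
echo more > g.txt; git add g.txt
git -c user.name=alice -c user.email=alice@example.com commit -q -m 'from alice'
git push -q origin main                 # remote 'main' moves ahead
cd "$tmp/bob"
git -c user.name=bob -c user.email=bob@example.com commit -q --amend -m 'rewritten by bob'
# Bob's remote-tracking origin/main is stale: the lease check fails and
# the push is rejected instead of silently discarding Alice's commit.
if git push -q --force-with-lease origin main 2>/dev/null; then
  echo 'unexpected: stale lease accepted'
fi
# After fetching, the lease matches what Bob has now seen, so the
# forced update (which knowingly replaces Alice's commit) goes through.
git fetch -q origin
git push -q --force-with-lease origin main
```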
--progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --no-recurse-submodules --recurse-submodules=check|on-demand|only|no May be used to make sure all submodule commits used by the revisions to be pushed are available on a remote-tracking branch. If `check` is used Git will verify that all submodule commits that changed in the revisions to be pushed are available on at least one remote of the submodule. If any commits are missing the push will be aborted and exit with non-zero status. If `on-demand` is used all submodules that changed in the revisions to be pushed will be pushed. If on-demand was not able to push all necessary revisions it will also be aborted and exit with non-zero status. If `only` is used all submodules will be pushed while the superproject is left unpushed. A value of `no` or using `--no-recurse-submodules` can be used to override the push.recurseSubmodules configuration variable when no submodule recursion is required. When using `on-demand` or `only`, if a submodule has a "push.recurseSubmodules={on-demand,only}" or "submodule.recurse" configuration, further recursion will occur. In this case, "only" is treated as "on-demand". --[no-]verify Toggle the pre-push hook (see [githooks[5]](githooks)). The default is --verify, giving the hook a chance to prevent the push. With --no-verify, the hook is bypassed completely. -4 --ipv4 Use IPv4 addresses only, ignoring IPv6 addresses. -6 --ipv6 Use IPv6 addresses only, ignoring IPv4 addresses. Git urls -------- In general, URLs contain information about the transport protocol, the address of the remote server, and the path to the repository. Depending on the transport protocol, some of this information may be absent. 
Git supports ssh, git, http, and https protocols (in addition, ftp, and ftps can be used for fetching, but this is inefficient and deprecated; do not use it). The native transport (i.e. git:// URL) does no authentication and should be used with caution on unsecured networks. The following syntaxes may be used with them: * ssh://[user@]host.xz[:port]/path/to/repo.git/ * git://host.xz[:port]/path/to/repo.git/ * http[s]://host.xz[:port]/path/to/repo.git/ * ftp[s]://host.xz[:port]/path/to/repo.git/ An alternative scp-like syntax may also be used with the ssh protocol: * [user@]host.xz:path/to/repo.git/ This syntax is only recognized if there are no slashes before the first colon. This helps differentiate a local path that contains a colon. For example the local path `foo:bar` could be specified as an absolute path or `./foo:bar` to avoid being misinterpreted as an ssh url. The ssh and git protocols additionally support ~username expansion: * ssh://[user@]host.xz[:port]/~[user]/path/to/repo.git/ * git://host.xz[:port]/~[user]/path/to/repo.git/ * [user@]host.xz:/~[user]/path/to/repo.git/ For local repositories, also supported by Git natively, the following syntaxes may be used: * /path/to/repo.git/ * file:///path/to/repo.git/ These two syntaxes are mostly equivalent, except when cloning, when the former implies --local option. See [git-clone[1]](git-clone) for details. `git clone`, `git fetch` and `git pull`, but not `git push`, will also accept a suitable bundle file. See [git-bundle[1]](git-bundle). When Git doesn’t know how to handle a certain transport protocol, it attempts to use the `remote-<transport>` remote helper, if one exists. To explicitly request a remote helper, the following syntax may be used: * <transport>::<address> where <address> may be a path, a server and path, or an arbitrary URL-like string recognized by the specific remote helper being invoked. See [gitremote-helpers[7]](gitremote-helpers) for details. 
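Of the syntaxes above, the local forms are the easiest to experiment with. A small sketch (paths and branch names invented) showing that a plain path and a `file://` URL address the same repository:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git init -q work; cd work
git symbolic-ref HEAD refs/heads/main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m hello

# Plain path and file:// URL reach the same repository:
git push -q "$tmp/remote.git" main
git push -q "file://$tmp/remote.git" main:refs/heads/copy
```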
If there are a large number of similarly-named remote repositories and you want to use a different format for them (such that the URLs you use will be rewritten into URLs that work), you can create a configuration section of the form: ``` [url "<actual url base>"] insteadOf = <other url base> ``` For example, with this: ``` [url "git://git.host.xz/"] insteadOf = host.xz:/path/to/ insteadOf = work: ``` a URL like "work:repo.git" or like "host.xz:/path/to/repo.git" will be rewritten in any context that takes a URL to be "git://git.host.xz/repo.git". If you want to rewrite URLs for push only, you can create a configuration section of the form: ``` [url "<actual url base>"] pushInsteadOf = <other url base> ``` For example, with this: ``` [url "ssh://example.org/"] pushInsteadOf = git://example.org/ ``` a URL like "git://example.org/path/to/repo.git" will be rewritten to "ssh://example.org/path/to/repo.git" for pushes, but pulls will still use the original URL. Remotes ------- The name of one of the following can be used instead of a URL as `<repository>` argument: * a remote in the Git configuration file: `$GIT_DIR/config`, * a file in the `$GIT_DIR/remotes` directory, or * a file in the `$GIT_DIR/branches` directory. All of these also allow you to omit the refspec from the command line because they each contain a refspec which git will use by default. ### Named remote in configuration file You can choose to provide the name of a remote which you had previously configured using [git-remote[1]](git-remote), [git-config[1]](git-config) or even by a manual edit to the `$GIT_DIR/config` file. The URL of this remote will be used to access the repository. The refspec of this remote will be used by default when you do not provide a refspec on the command line. The entry in the config file would appear like this: ``` [remote "<name>"] url = <URL> pushurl = <pushurl> push = <refspec> fetch = <refspec> ``` The `<pushurl>` is used for pushes only. 
It is optional and defaults to `<URL>`. ### Named file in `$GIT_DIR/remotes` You can choose to provide the name of a file in `$GIT_DIR/remotes`. The URL in this file will be used to access the repository. The refspec in this file will be used as default when you do not provide a refspec on the command line. This file should have the following format: ``` URL: one of the above URL format Push: <refspec> Pull: <refspec> ``` `Push:` lines are used by `git push` and `Pull:` lines are used by `git pull` and `git fetch`. Multiple `Push:` and `Pull:` lines may be specified for additional branch mappings. ### Named file in `$GIT_DIR/branches` You can choose to provide the name of a file in `$GIT_DIR/branches`. The URL in this file will be used to access the repository. This file should have the following format: ``` <URL>#<head> ``` `<URL>` is required; `#<head>` is optional. Depending on the operation, git will use one of the following refspecs, if you don’t provide one on the command line. `<branch>` is the name of this file in `$GIT_DIR/branches` and `<head>` defaults to `master`. git fetch uses: ``` refs/heads/<head>:refs/heads/<branch> ``` git push uses: ``` HEAD:refs/heads/<head> ``` Output ------ The output of "git push" depends on the transport method used; this section describes the output when pushing over the Git protocol (either locally or via ssh). The status of the push is output in tabular form, with each line representing the status of a single ref. Each line is of the form: ``` <flag> <summary> <from> -> <to> (<reason>) ``` If --porcelain is used, then each line of the output is of the form: ``` <flag> \t <from>:<to> \t <summary> (<reason>) ``` The status of up-to-date refs is shown only if --porcelain or --verbose option is used. 
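The `--porcelain` format above is meant for scripting. A sketch (invented names) that pushes a new branch and checks the tab-separated status line on stdout, where `*` flags a successfully pushed new ref:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git init -q work; cd work
git symbolic-ref HEAD refs/heads/main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m initial
git remote add origin ../remote.git

# With --porcelain, full symbolic ref names appear in the status line.
out=$(git push --porcelain origin main)
printf '%s\n' "$out" | grep '^\*' | grep -q 'refs/heads/main:refs/heads/main'
```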
flag A single character indicating the status of the ref: (space) for a successfully pushed fast-forward; `+` for a successful forced update; `-` for a successfully deleted ref; `*` for a successfully pushed new ref; `!` for a ref that was rejected or failed to push; and `=` for a ref that was up to date and did not need pushing. summary For a successfully pushed ref, the summary shows the old and new values of the ref in a form suitable for using as an argument to `git log` (this is `<old>..<new>` in most cases, and `<old>...<new>` for forced non-fast-forward updates). For a failed update, more details are given: rejected Git did not try to send the ref at all, typically because it is not a fast-forward and you did not force the update. remote rejected The remote end refused the update. Usually caused by a hook on the remote side, or because the remote repository has one of the following safety options in effect: `receive.denyCurrentBranch` (for pushes to the checked out branch), `receive.denyNonFastForwards` (for forced non-fast-forward updates), `receive.denyDeletes` or `receive.denyDeleteCurrent`. See [git-config[1]](git-config). remote failure The remote end did not report the successful update of the ref, perhaps because of a temporary error on the remote side, a break in the network connection, or other transient error. from The name of the local ref being pushed, minus its `refs/<type>/` prefix. In the case of deletion, the name of the local ref is omitted. to The name of the remote ref being updated, minus its `refs/<type>/` prefix. reason A human-readable explanation. In the case of successfully pushed refs, no explanation is needed. For a failed ref, the reason for failure is described. Note about fast-forwards ------------------------ When an update changes a branch (or more in general, a ref) that used to point at commit A to point at another commit B, it is called a fast-forward update if and only if B is a descendant of A. 
In a fast-forward update from A to B, the set of commits that the original commit A built on top of is a subset of the commits the new commit B builds on top of. Hence, it does not lose any history. In contrast, a non-fast-forward update will lose history. For example, suppose you and somebody else started at the same commit X, and you built a history leading to commit B while the other person built a history leading to commit A. The history looks like this: ``` B / ---X---A ``` Further suppose that the other person already pushed changes leading to A back to the original repository from which you two obtained the original commit X. The push done by the other person updated the branch that used to point at commit X to point at commit A. It is a fast-forward. But if you try to push, you will attempt to update the branch (that now points at A) with commit B. This does `not` fast-forward. If you did so, the changes introduced by commit A will be lost, because everybody will now start building on top of B. The command by default does not allow an update that is not a fast-forward to prevent such loss of history. If you do not want to lose your work (history from X to B) or the work by the other person (history from X to A), you would need to first fetch the history from the repository, create a history that contains changes done by both parties, and push the result back. You can perform "git pull", resolve potential conflicts, and "git push" the result. A "git pull" will create a merge commit C between commits A and B. ``` B---C / / ---X---A ``` Updating A with the resulting merge commit will fast-forward and your push will be accepted. Alternatively, you can rebase your change between X and B on top of A, with "git pull --rebase", and push the result back. The rebase will create a new commit D that builds the change between X and B on top of A. ``` B D / / ---X---A ``` Again, updating A with this commit will fast-forward and your push will be accepted. 
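The recovery described above can be replayed end to end with two local clones. Everything in this sketch (repository names `you`/`them`, files, identities) is invented; the rejected push is followed by `git pull --rebase` and a successful fast-forward push:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git -C remote.git symbolic-ref HEAD refs/heads/main
git init -q you; cd you
git symbolic-ref HEAD refs/heads/main
echo x > x.txt; git add x.txt
git -c user.name=you -c user.email=you@example.com commit -q -m X
git remote add origin ../remote.git
git push -q -u origin main
cd "$tmp"
git clone -q "$tmp/remote.git" them
cd them
echo a > a.txt; git add a.txt
git -c user.name=them -c user.email=them@example.com commit -q -m A
git push -q origin main                  # the other person pushes A first
cd "$tmp/you"
echo b > b.txt; git add b.txt
git -c user.name=you -c user.email=you@example.com commit -q -m B
if ! git push -q origin main 2>/dev/null; then
  # Rejected: not a fast-forward. Rebase B on top of A, then push.
  git -c user.name=you -c user.email=you@example.com pull -q --rebase origin main
  git push -q origin main                # now fast-forwards
fi
```

This is the `git pull --rebase` variant drawn as "B D" in the diagram above; a plain `git pull` at the same point would instead create the merge commit C.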
There is another common situation where you may encounter non-fast-forward rejection when you try to push, and it is possible even when you are pushing into a repository nobody else pushes into. After you push commit A yourself (in the first picture in this section), you replace it with "git commit --amend" to produce commit B, and then try to push it out, because you forgot that you had pushed A out already. In such a case, and only if you are certain that nobody in the meantime fetched your earlier commit A (and started building on top of it), you can run "git push --force" to overwrite it. In other words, "git push --force" is a method reserved for a case where you do mean to lose history. Examples -------- `git push` Works like `git push <remote>`, where <remote> is the current branch’s remote (or `origin`, if no remote is configured for the current branch). `git push origin` Without additional configuration, pushes the current branch to the configured upstream (`branch.<name>.merge` configuration variable) if it has the same name as the current branch, and errors out without pushing otherwise. The default behavior of this command when no <refspec> is given can be configured by setting the `push` option of the remote, or the `push.default` configuration variable. For example, to default to pushing only the current branch to `origin` use `git config remote.origin.push HEAD`. Any valid <refspec> (like the ones in the examples below) can be configured as the default for `git push origin`. `git push origin :` Push "matching" branches to `origin`. See <refspec> in the [OPTIONS](#OPTIONS) section above for a description of "matching" branches. `git push origin master` Find a ref that matches `master` in the source repository (most likely, it would find `refs/heads/master`), and update the same ref (e.g. `refs/heads/master`) in `origin` repository with it. If `master` did not exist remotely, it would be created.
`git push origin HEAD` A handy way to push the current branch to the same name on the remote. `git push mothership master:satellite/master dev:satellite/dev` Use the source ref that matches `master` (e.g. `refs/heads/master`) to update the ref that matches `satellite/master` (most probably `refs/remotes/satellite/master`) in the `mothership` repository; do the same for `dev` and `satellite/dev`. See the section describing `<refspec>...` above for a discussion of the matching semantics. This is to emulate `git fetch` run on the `mothership` using `git push` that is run in the opposite direction in order to integrate the work done on `satellite`, and is often necessary when you can only make a connection in one way (i.e. satellite can ssh into mothership but mothership cannot initiate connection to satellite because the latter is behind a firewall or does not run sshd). After running this `git push` on the `satellite` machine, you would ssh into the `mothership` and run `git merge` there to complete the emulation of the `git pull` that would have been run on `mothership` to pull changes made on `satellite`. `git push origin HEAD:master` Push the current branch to the remote ref matching `master` in the `origin` repository. This form is convenient to push the current branch without thinking about its local name. `git push origin master:refs/heads/experimental` Create the branch `experimental` in the `origin` repository by copying the current `master` branch. This form is only needed to create a new branch or tag in the remote repository when the local name and the remote name are different; otherwise, the ref name on its own will work. `git push origin :experimental` Find a ref that matches `experimental` in the `origin` repository (e.g. `refs/heads/experimental`), and delete it. `git push origin +dev:master` Update the origin repository’s master branch with the dev branch, allowing non-fast-forward updates.
**This can leave unreferenced commits dangling in the origin repository.** Consider the following situation, where a fast-forward is not possible: ``` o---o---o---A---B origin/master \ X---Y---Z dev ``` The above command would change the origin repository to ``` A---B (unnamed branch) / o---o---o---X---Y---Z master ``` Commits A and B would no longer belong to a branch with a symbolic name, and so would be unreachable. As such, these commits would be removed by a `git gc` command on the origin repository. Security -------- The fetch and push protocols are not designed to prevent one side from stealing data from the other repository that was not intended to be shared. If you have private data that you need to protect from a malicious peer, your best option is to store it in another repository. This applies to both clients and servers. In particular, namespaces on a server are not effective for read access control; you should only grant read access to a namespace to clients that you would trust with read access to the entire repository. The known attack vectors are as follows: 1. The victim sends "have" lines advertising the IDs of objects it has that are not explicitly intended to be shared but can be used to optimize the transfer if the peer also has them. The attacker chooses an object ID X to steal and sends a ref to X, but isn’t required to send the content of X because the victim already has it. Now the victim believes that the attacker has X, and it sends the content of X back to the attacker later. (This attack is most straightforward for a client to perform on a server, by creating a ref to X in the namespace the client has access to and then fetching it. The most likely way for a server to perform it on a client is to "merge" X into a public branch and hope that the user does additional work on this branch and pushes it back to the server without noticing the merge.) 2. As in #1, the attacker chooses an object ID X to steal. 
The victim sends an object Y that the attacker already has, and the attacker falsely claims to have X and not Y, so the victim sends Y as a delta against X. The delta reveals regions of X that are similar to Y to the attacker. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: push.autoSetupRemote If set to "true" assume `--set-upstream` on default push when no upstream tracking exists for the current branch; this option takes effect with push.default options `simple`, `upstream`, and `current`. It is useful if by default you want new branches to be pushed to the default remote (like the behavior of `push.default=current`) and you also want the upstream tracking to be set. Workflows most likely to benefit from this option are `simple` central workflows where all branches are expected to have the same name on the remote. push.default Defines the action `git push` should take if no refspec is given (whether from the command-line, config, or elsewhere). Different values are well-suited for specific workflows; for instance, in a purely central workflow (i.e. the fetch source is equal to the push destination), `upstream` is probably what you want. Possible values are: * `nothing` - do not push anything (error out) unless a refspec is given. This is primarily meant for people who want to avoid mistakes by always being explicit. * `current` - push the current branch to update a branch with the same name on the receiving end. Works in both central and non-central workflows. * `upstream` - push the current branch back to the branch whose changes are usually integrated into the current branch (which is called `@{upstream}`). This mode only makes sense if you are pushing to the same repository you would normally pull from (i.e. central workflow). * `tracking` - This is a deprecated synonym for `upstream`. 
* `simple` - pushes the current branch with the same name on the remote. If you are working on a centralized workflow (pushing to the same repository you pull from, which is typically `origin`), then you need to configure an upstream branch with the same name. This mode is the default since Git 2.0, and is the safest option suited for beginners. * `matching` - push all branches having the same name on both ends. This makes the repository you are pushing to remember the set of branches that will be pushed out (e.g. if you always push `maint` and `master` there and no other branches, the repository you push to will have these two branches, and your local `maint` and `master` will be pushed there). To use this mode effectively, you have to make sure `all` the branches you would push out are ready to be pushed out before running `git push`, as the whole point of this mode is to allow you to push all of the branches in one go. If you usually finish work on only one branch and push out the result, while other branches are unfinished, this mode is not for you. Also this mode is not suitable for pushing into a shared central repository, as other people may add new branches there, or update the tip of existing branches outside your control. This used to be the default, but not since Git 2.0 (`simple` is the new default). push.followTags If set to true enable `--follow-tags` option by default. You may override this configuration at time of push by specifying `--no-follow-tags`. push.gpgSign May be set to a boolean value, or the string `if-asked`. A true value causes all pushes to be GPG signed, as if `--signed` is passed to [git-push[1]](git-push). The string `if-asked` causes pushes to be signed if the server supports it, as if `--signed=if-asked` is passed to `git push`. A false value may override a value from a lower-priority config file. An explicit command-line flag always overrides this config option. 
push.pushOption
When no `--push-option=<option>` argument is given from the command line, `git push` behaves as if each <value> of this variable is given as `--push-option=<value>`.

This is a multi-valued variable, and an empty value can be used in a higher priority configuration file (e.g. `.git/config` in a repository) to clear the values inherited from lower priority configuration files (e.g. `$HOME/.gitconfig`).

```
Example:

/etc/gitconfig
  push.pushoption = a
  push.pushoption = b

~/.gitconfig
  push.pushoption = c

repo/.git/config
  push.pushoption =
  push.pushoption = b

This will result in only b (a and c are cleared).
```

push.recurseSubmodules
May be "check", "on-demand", "only", or "no", with the same behavior as that of "push --recurse-submodules". If not set, `no` is used by default, unless `submodule.recurse` is set (in which case a `true` value means `on-demand`).

push.useForceIfIncludes
If set to "true", it is equivalent to specifying `--force-if-includes` as an option to [git-push[1]](git-push) in the command line. Adding `--no-force-if-includes` at the time of push overrides this configuration setting.

push.negotiate
If set to "true", attempt to reduce the size of the packfile sent by rounds of negotiation in which the client and the server attempt to find commits in common. If "false", Git will rely solely on the server’s ref advertisement to find commits in common.

push.useBitmaps
If set to "false", disable use of bitmaps for "git push" even if `pack.useBitmaps` is "true", without preventing other git operations from using bitmaps. Default is true.
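As an illustration of the variables above, here is a hedged sketch that sets a conservative combination in a throwaway repository. The `ci.skip` and `reviewer=me` strings are made-up examples of push options, not options any real server is guaranteed to understand.

```shell
# Sketch only: exercising the push.* settings described above in a scratch repo.
tmp=$(mktemp -d) && cd "$tmp" && git init -q

git config push.default simple           # push current branch to same-name upstream
git config push.autoSetupRemote true     # set upstream automatically on first push
git config push.followTags true          # behave as if --follow-tags were given
git config push.useForceIfIncludes true  # behave as if --force-if-includes were given

# push.pushOption is multi-valued; each value becomes a --push-option argument.
git config --add push.pushOption ci.skip      # hypothetical option string
git config --add push.pushOption reviewer=me  # hypothetical option string
```

Setting an empty `push.pushOption` value in a higher-priority file clears any values inherited from lower-priority files, as the example block above shows.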
git git-for-each-ref git-for-each-ref ================ Name ---- git-for-each-ref - Output information on each ref Synopsis -------- ``` git for-each-ref [--count=<count>] [--shell|--perl|--python|--tcl] [(--sort=<key>)…​] [--format=<format>] [<pattern>…​] [--points-at=<object>] [--merged[=<object>]] [--no-merged[=<object>]] [--contains[=<object>]] [--no-contains[=<object>]] ``` Description ----------- Iterate over all refs that match `<pattern>` and show them according to the given `<format>`, after sorting them according to the given set of `<key>`. If `<count>` is given, stop after showing that many refs. The interpolated values in `<format>` can optionally be quoted as string literals in the specified host language allowing their direct evaluation in that language. Options ------- <pattern>…​ If one or more patterns are given, only refs are shown that match against at least one pattern, either using fnmatch(3) or literally, in the latter case matching completely or from the beginning up to a slash. --count=<count> By default the command shows all refs that match `<pattern>`. This option makes it stop after showing that many refs. --sort=<key> A field name to sort on. Prefix `-` to sort in descending order of the value. When unspecified, `refname` is used. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key. --format=<format> A string that interpolates `%(fieldname)` from a ref being shown and the object it points at. If `fieldname` is prefixed with an asterisk (`*`) and the ref points at a tag object, use the value for the field in the object which the tag object refers to (instead of the field in the tag object). When unspecified, `<format>` defaults to `%(objectname) SPC %(objecttype) TAB %(refname)`. It also interpolates `%%` to `%`, and `%xx` where `xx` are hex digits interpolates to character with hex code `xx`; for example `%00` interpolates to `\0` (NUL), `%09` to `\t` (TAB) and `%0a` to `\n` (LF). 
--color[=<when>]
Respect any colors specified in the `--format` option. The `<when>` field must be one of `always`, `never`, or `auto` (if `<when>` is absent, behave as if `always` was given).

--shell
--perl
--python
--tcl
If given, strings that substitute `%(fieldname)` placeholders are quoted as string literals suitable for the specified host language. This is meant to produce a scriptlet that can directly be `eval`ed.

--points-at=<object>
Only list refs which point at the given object.

--merged[=<object>]
Only list refs whose tips are reachable from the specified commit (HEAD if not specified).

--no-merged[=<object>]
Only list refs whose tips are not reachable from the specified commit (HEAD if not specified).

--contains[=<object>]
Only list refs which contain the specified commit (HEAD if not specified).

--no-contains[=<object>]
Only list refs which don’t contain the specified commit (HEAD if not specified).

--ignore-case
Sorting and filtering refs are case insensitive.

Field names
-----------

Various values from structured fields in referenced objects can be used to interpolate into the resulting output, or as sort keys.

For all objects, the following names can be used:

refname
The name of the ref (the part after $GIT\_DIR/). For a non-ambiguous short name of the ref append `:short`. The option core.warnAmbiguousRefs is used to select the strict abbreviation mode. If `lstrip=<N>` (`rstrip=<N>`) is appended, strips `<N>` slash-separated path components from the front (back) of the refname (e.g. `%(refname:lstrip=2)` turns `refs/tags/foo` into `foo` and `%(refname:rstrip=2)` turns `refs/tags/foo` into `refs`). If `<N>` is a negative number, strip as many path components as necessary from the specified end to leave `-<N>` path components (e.g. `%(refname:lstrip=-2)` turns `refs/tags/foo` into `tags/foo` and `%(refname:rstrip=-1)` turns `refs/tags/foo` into `refs`).
When the ref does not have enough components, the result becomes an empty string if stripping with positive <N>, or it becomes the full refname if stripping with negative <N>. Neither is an error.

`strip` can be used as a synonym to `lstrip`.

objecttype
The type of the object (`blob`, `tree`, `commit`, `tag`).

objectsize
The size of the object (the same as `git cat-file -s` reports). Append `:disk` to get the size, in bytes, that the object takes up on disk. See the note about on-disk sizes in the `CAVEATS` section below.

objectname
The object name (aka SHA-1). For a non-ambiguous abbreviation of the object name append `:short`. For an abbreviation of the object name with desired length append `:short=<length>`, where the minimum length is MINIMUM\_ABBREV. The length may be exceeded to ensure unique object names.

deltabase
This expands to the object name of the delta base for the given object, if it is stored as a delta. Otherwise it expands to the null object name (all zeroes).

upstream
The name of a local ref which can be considered “upstream” from the displayed ref. Respects `:short`, `:lstrip` and `:rstrip` in the same way as `refname` above. Additionally respects `:track` to show "[ahead N, behind M]" and `:trackshort` to show the terse version: ">" (ahead), "<" (behind), "<>" (ahead and behind), or "=" (in sync). `:track` also prints "[gone]" whenever an unknown upstream ref is encountered. Append `:track,nobracket` to show tracking information without brackets (i.e. "ahead N, behind M").

For any remote-tracking branch `%(upstream)`, `%(upstream:remotename)` and `%(upstream:remoteref)` refer to the name of the remote and the name of the tracked remote ref, respectively. In other words, the remote-tracking branch can be updated explicitly and individually by using the refspec `%(upstream:remoteref):%(upstream)` to fetch from `%(upstream:remotename)`.

Has no effect if the ref does not have tracking information associated with it.
All the options apart from `nobracket` are mutually exclusive, but if used together the last option is selected. push The name of a local ref which represents the `@{push}` location for the displayed ref. Respects `:short`, `:lstrip`, `:rstrip`, `:track`, `:trackshort`, `:remotename`, and `:remoteref` options as `upstream` does. Produces an empty string if no `@{push}` ref is configured. HEAD `*` if HEAD matches current ref (the checked out branch), ' ' otherwise. color Change output color. Followed by `:<colorname>`, where color names are described under Values in the "CONFIGURATION FILE" section of [git-config[1]](git-config). For example, `%(color:bold red)`. align Left-, middle-, or right-align the content between %(align:…​) and %(end). The "align:" is followed by `width=<width>` and `position=<position>` in any order separated by a comma, where the `<position>` is either left, right or middle, default being left and `<width>` is the total length of the content with alignment. For brevity, the "width=" and/or "position=" prefixes may be omitted, and bare <width> and <position> used instead. For instance, `%(align:<width>,<position>)`. If the contents length is more than the width then no alignment is performed. If used with `--quote` everything in between %(align:…​) and %(end) is quoted, but if nested then only the topmost level performs quoting. if Used as %(if)…​%(then)…​%(end) or %(if)…​%(then)…​%(else)…​%(end). If there is an atom with value or string literal after the %(if) then everything after the %(then) is printed, else if the %(else) atom is used, then everything after %(else) is printed. We ignore space when evaluating the string before %(then), this is useful when we use the %(HEAD) atom which prints either "\*" or " " and we want to apply the `if` condition only on the `HEAD` ref. Append ":equals=<string>" or ":notequals=<string>" to compare the value between the %(if:…​) and %(then) atoms with the given string. 
symref
The ref which the given symbolic ref refers to. If not a symbolic ref, nothing is printed. Respects the `:short`, `:lstrip` and `:rstrip` options in the same way as `refname` above.

worktreepath
The absolute path to the worktree in which the ref is checked out, if it is checked out in any linked worktree. Empty string otherwise.

In addition to the above, for commit and tag objects, the header field names (`tree`, `parent`, `object`, `type`, and `tag`) can be used to specify the value in the header field. Fields `tree` and `parent` can also be used with modifier `:short` and `:short=<length>` just like `objectname`.

For commit and tag objects, the special `creatordate` and `creator` fields will correspond to the appropriate date or name-email-date tuple from the `committer` or `tagger` fields depending on the object type. These are intended for working on a mix of annotated and lightweight tags.

Fields that have a name-email-date tuple as their value (`author`, `committer`, and `tagger`) can be suffixed with `name`, `email`, and `date` to extract the named component. For email fields (`authoremail`, `committeremail` and `taggeremail`), `:trim` can be appended to get the email without angle brackets, and `:localpart` to get the part before the `@` symbol out of the trimmed email.

The raw data in an object is `raw`.

raw:size
The raw data size of the object.

Note that `--format=%(raw)` can not be used with `--python`, `--shell`, `--tcl`, because such languages may not support arbitrary binary data in their string variable type.

The message in a commit or a tag object is `contents`, from which `contents:<part>` can be used to extract various parts out of:

contents:size
The size in bytes of the commit or tag message.

contents:subject
The first paragraph of the message, which typically is a single line, is taken as the "subject" of the commit or the tag message. Instead of `contents:subject`, field `subject` can also be used to obtain the same results.
`:sanitize` can be appended to `subject` for a subject line suitable for a filename.

contents:body
The remainder of the commit or the tag message that follows the "subject".

contents:signature
The optional GPG signature of the tag.

contents:lines=N
The first `N` lines of the message.

Additionally, the trailers as interpreted by [git-interpret-trailers[1]](git-interpret-trailers) are obtained as `trailers[:options]` (or by using the historical alias `contents:trailers[:options]`). For valid [:option] values see `trailers` section of [git-log[1]](git-log).

For sorting purposes, fields with numeric values sort in numeric order (`objectsize`, `authordate`, `committerdate`, `creatordate`, `taggerdate`). All other fields are used to sort in their byte-value order.

There is also an option to sort by versions; this can be done by using the fieldname `version:refname` or its alias `v:refname`.

In any case, a field name that refers to a field inapplicable to the object referred to by the ref does not cause an error. It returns an empty string instead.

As a special case for the date-type fields, you may specify a format for the date by adding `:` followed by date format name (see the values the `--date` option to [git-rev-list[1]](git-rev-list) takes).

Some atoms like %(align) and %(if) always require a matching %(end). We call them "opening atoms" and sometimes denote them as %($open).

When a scripting language specific quoting is in effect, everything between a top-level opening atom and its matching %(end) is evaluated according to the semantics of the opening atom and only its result from the top-level is quoted.

Examples
--------

An example directly producing formatted text.
Show the most recent 3 tagged commits: ``` #!/bin/sh git for-each-ref --count=3 --sort='-*authordate' \ --format='From: %(*authorname) %(*authoremail) Subject: %(*subject) Date: %(*authordate) Ref: %(*refname) %(*body) ' 'refs/tags' ``` A simple example showing the use of shell eval on the output, demonstrating the use of --shell. List the prefixes of all heads: ``` #!/bin/sh git for-each-ref --shell --format="ref=%(refname)" refs/heads | \ while read entry do eval "$entry" echo `dirname $ref` done ``` A bit more elaborate report on tags, demonstrating that the format may be an entire script: ``` #!/bin/sh fmt=' r=%(refname) t=%(*objecttype) T=${r#refs/tags/} o=%(*objectname) n=%(*authorname) e=%(*authoremail) s=%(*subject) d=%(*authordate) b=%(*body) kind=Tag if test "z$t" = z then # could be a lightweight tag t=%(objecttype) kind="Lightweight tag" o=%(objectname) n=%(authorname) e=%(authoremail) s=%(subject) d=%(authordate) b=%(body) fi echo "$kind $T points at a $t object $o" if test "z$t" = zcommit then echo "The commit was authored by $n $e at $d, and titled $s Its message reads as: " echo "$b" | sed -e "s/^/ /" echo fi ' eval=`git for-each-ref --shell --format="$fmt" \ --sort='*objecttype' \ --sort=-taggerdate \ refs/tags` eval "$eval" ``` An example to show the usage of %(if)…​%(then)…​%(else)…​%(end). This prefixes the current branch with a star. ``` git for-each-ref --format="%(if)%(HEAD)%(then)* %(else) %(end)%(refname:short)" refs/heads/ ``` An example to show the usage of %(if)…​%(then)…​%(end). This prints the authorname, if present. ``` git for-each-ref --format="%(refname)%(if)%(authorname)%(then) Authored by: %(authorname)%(end)" ``` Caveats ------- Note that the sizes of objects on disk are reported accurately, but care should be taken in drawing conclusions about which refs or objects are responsible for disk usage. 
The size of a packed non-delta object may be much larger than the size of objects which delta against it, but the choice of which object is the base and which is the delta is arbitrary and is subject to change during a repack. Note also that multiple copies of an object may be present in the object database; in this case, it is undefined which copy’s size or delta base will be reported. Notes ----- When combining multiple `--contains` and `--no-contains` filters, only references that contain at least one of the `--contains` commits and contain none of the `--no-contains` commits are shown. When combining multiple `--merged` and `--no-merged` filters, only references that are reachable from at least one of the `--merged` commits and from none of the `--no-merged` commits are shown. See also -------- [git-show-ref[1]](git-show-ref) git git-mv git-mv ====== Name ---- git-mv - Move or rename a file, a directory, or a symlink Synopsis -------- ``` git mv [<options>] <source>…​ <destination> ``` Description ----------- Move or rename a file, directory or symlink. ``` git mv [-v] [-f] [-n] [-k] <source> <destination> git mv [-v] [-f] [-n] [-k] <source> ... <destination directory> ``` In the first form, it renames <source>, which must exist and be either a file, symlink or directory, to <destination>. In the second form, the last argument has to be an existing directory; the given sources will be moved into this directory. The index is updated after successful completion, but the change must still be committed. Options ------- -f --force Force renaming or moving of a file even if the <destination> exists. -k Skip move or rename actions which would lead to an error condition. An error happens when a source is neither existing nor controlled by Git, or when it would overwrite an existing file unless `-f` is given. -n --dry-run Do nothing; only show what would happen -v --verbose Report the names of files as they are moved. 
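A hedged sketch of the two forms described above, run in a throwaway repository (the file and directory names are made up for illustration):

```shell
# Sketch only: the two forms of git mv in a scratch repository.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo alpha >old.txt && git add old.txt
mkdir docs

git mv old.txt new.txt   # first form: rename a single tracked file
git mv -n new.txt docs   # --dry-run: only report what would happen
git mv new.txt docs      # second form: move into an existing directory

git ls-files             # index now records docs/new.txt; still needs a commit
```

Note that the index is updated immediately, but as the description says, the change must still be committed.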
Submodules
----------

Moving a submodule using a gitfile (which means it was cloned with a Git version 1.7.8 or newer) will update the gitfile and core.worktree setting to make the submodule work in the new location. It will also attempt to update the submodule.<name>.path setting in the [gitmodules[5]](gitmodules) file and stage that file (unless -n is used).

Bugs
----

Each time a superproject update moves a populated submodule (e.g. when switching between commits before and after the move) a stale submodule checkout will remain in the old location and an empty directory will appear in the new location. To populate the submodule again in the new location the user will have to run "git submodule update" afterwards. Removing the old directory is only safe when it uses a gitfile, as otherwise the history of the submodule will be deleted too. Both steps will be obsolete when recursive submodule update has been implemented.

git git-commit-tree

git-commit-tree
===============

Name
----

git-commit-tree - Create a new commit object

Synopsis
--------

```
git commit-tree <tree> [(-p <parent>)…​]
git commit-tree [(-p <parent>)…​] [-S[<keyid>]] [(-m <message>)…​] [(-F <file>)…​] <tree>
```

Description
-----------

This is usually not what an end user wants to run directly. See [git-commit[1]](git-commit) instead.

Creates a new commit object based on the provided tree object and emits the new commit object id on stdout. The log message is read from the standard input, unless `-m` or `-F` options are given.

The `-m` and `-F` options can be given any number of times, in any order. The commit log message will be composed in the order in which the options are given.

A commit object may have any number of parents. With exactly one parent, it is an ordinary commit. Having more than one parent makes the commit a merge between several lines of history. Initial (root) commits have no parents.
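As a concrete sketch of this plumbing flow, the following builds a root commit by hand in a throwaway repository (the file name, branch name, and identity are made up for illustration):

```shell
# Sketch only: building a root commit with plumbing commands.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
export GIT_AUTHOR_NAME=Example GIT_AUTHOR_EMAIL=example@example.com
export GIT_COMMITTER_NAME=Example GIT_COMMITTER_EMAIL=example@example.com

echo hello >greeting && git add greeting
tree=$(git write-tree)                                 # write the index as a tree object
commit=$(git commit-tree -m "Initial commit" "$tree")  # root commit: no -p parent
git update-ref refs/heads/main "$commit"               # give the new commit a name
git cat-file -t "$commit"                              # → commit
```

A second commit on top of this one would pass `-p "$commit"` to record the parent.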
While a tree represents a particular directory state of a working directory, a commit represents that state in "time", and explains how to get there. Normally a commit would identify a new "HEAD" state, and while Git doesn’t care where you save the note about that state, in practice we tend to just write the result to the file that is pointed at by `.git/HEAD`, so that we can always see what the last committed state was. Options ------- <tree> An existing tree object. -p <parent> Each `-p` indicates the id of a parent commit object. -m <message> A paragraph in the commit log message. This can be given more than once and each <message> becomes its own paragraph. -F <file> Read the commit log message from the given file. Use `-` to read from the standard input. This can be given more than once and the content of each file becomes its own paragraph. -S[<keyid>] --gpg-sign[=<keyid>] --no-gpg-sign GPG-sign commits. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. `--no-gpg-sign` is useful to countermand a `--gpg-sign` option given earlier on the command line. Commit information ------------------ A commit encapsulates: * all parent object ids * author name, email and date * committer name and email and the commit time. A commit comment is read from stdin. If a changelog entry is not provided via "<" redirection, `git commit-tree` will just wait for one to be entered and terminated with ^D. Date formats ------------ The `GIT_AUTHOR_DATE` and `GIT_COMMITTER_DATE` environment variables support the following date formats: Git internal format It is `<unix-timestamp> <time-zone-offset>`, where `<unix-timestamp>` is the number of seconds since the UNIX epoch. `<time-zone-offset>` is a positive or negative offset from UTC. For example CET (which is 1 hour ahead of UTC) is `+0100`. RFC 2822 The standard email format as described by RFC 2822, for example `Thu, 07 Apr 2005 22:13:13 +0200`. 
ISO 8601 Time and date specified by the ISO 8601 standard, for example `2005-04-07T22:13:13`. The parser accepts a space instead of the `T` character as well. Fractional parts of a second will be ignored, for example `2005-04-07T22:13:13.019` will be treated as `2005-04-07T22:13:13`. | | | | --- | --- | | Note | In addition, the date part is accepted in the following formats: `YYYY.MM.DD`, `MM/DD/YYYY` and `DD.MM.YYYY`. | Discussion ---------- Git is to some extent character encoding agnostic. * The contents of the blob objects are uninterpreted sequences of bytes. There is no encoding translation at the core level. * Path names are encoded in UTF-8 normalization form C. This applies to tree objects, the index file, ref names, as well as path names in command line arguments, environment variables and config files (`.git/config` (see [git-config[1]](git-config)), [gitignore[5]](gitignore), [gitattributes[5]](gitattributes) and [gitmodules[5]](gitmodules)). Note that Git at the core level treats path names simply as sequences of non-NUL bytes, there are no path name encoding conversions (except on Mac and Windows). Therefore, using non-ASCII path names will mostly work even on platforms and file systems that use legacy extended ASCII encodings. However, repositories created on such systems will not work properly on UTF-8-based systems (e.g. Linux, Mac, Windows) and vice versa. Additionally, many Git-based tools simply assume path names to be UTF-8 and will fail to display other encodings correctly. * Commit log messages are typically encoded in UTF-8, but other extended ASCII encodings are also supported. This includes ISO-8859-x, CP125x and many others, but `not` UTF-16/32, EBCDIC and CJK multi-byte encodings (GBK, Shift-JIS, Big5, EUC-x, CP9xx etc.). Although we encourage that the commit log messages are encoded in UTF-8, both the core and Git Porcelain are designed not to force UTF-8 on projects. 
If all participants of a particular project find it more convenient to use legacy encodings, Git does not forbid it. However, there are a few things to keep in mind.

1. `git commit` and `git commit-tree` issue a warning if the commit log message given to them does not look like a valid UTF-8 string, unless you explicitly say your project uses a legacy encoding. The way to say this is to have `i18n.commitEncoding` in `.git/config` file, like this:

```
[i18n]
	commitEncoding = ISO-8859-1
```

Commit objects created with the above setting record the value of `i18n.commitEncoding` in their `encoding` header. This is to help other people who look at them later. Lack of this header implies that the commit log message is encoded in UTF-8.

2. `git log`, `git show`, `git blame` and friends look at the `encoding` header of a commit object, and try to re-code the log message into UTF-8 unless otherwise specified. You can specify the desired output encoding with `i18n.logOutputEncoding` in `.git/config` file, like this:

```
[i18n]
	logOutputEncoding = ISO-8859-1
```

If you do not have this configuration variable, the value of `i18n.commitEncoding` is used instead.

Note that we deliberately chose not to re-code the commit log message when a commit is made to force UTF-8 at the commit object level, because re-coding to UTF-8 is not necessarily a reversible operation.

Files
-----

/etc/mailname

See also
--------

[git-write-tree[1]](git-write-tree)

[git-commit[1]](git-commit)
git git-sh-setup

git-sh-setup
============

Name
----

git-sh-setup - Common Git shell script setup code

Synopsis
--------

```
. "$(git --exec-path)/git-sh-setup"
```

Description
-----------

This is not a command the end user would want to run. Ever. This documentation is meant for people who are studying the Porcelain-ish scripts and/or are writing new ones.

The `git sh-setup` scriptlet is designed to be sourced (using `.`) by other shell scripts to set up some variables pointing at the normal Git directories and a few helper shell functions.

Before sourcing it, your script should set up a few variables; `USAGE` (and `LONG_USAGE`, if any) is used to define the message given by the `usage()` shell function. `SUBDIRECTORY_OK` can be set if the script can run from a subdirectory of the working tree (some commands do not).

The scriptlet sets `GIT_DIR` and `GIT_OBJECT_DIRECTORY` shell variables, but does **not** export them to the environment.

Functions
---------

die
exit after emitting the supplied error message to the standard error stream.

usage
die with the usage message.

set\_reflog\_action
Set `GIT_REFLOG_ACTION` environment to a given string (typically the name of the program) unless it is already set. Whenever the script runs a `git` command that updates refs, a reflog entry is created using the value of this string to leave the record of what command updated the ref.

git\_editor
runs an editor of the user’s choice (GIT\_EDITOR, core.editor, VISUAL or EDITOR) on a given file, but errors out if no editor is specified and the terminal is dumb.

is\_bare\_repository
outputs `true` or `false` to the standard output stream to indicate if the repository is a bare repository (i.e. without an associated working tree).

cd\_to\_toplevel
runs chdir to the toplevel of the working tree.

require\_work\_tree
checks if the current directory is within the working tree of the repository, and otherwise dies.
require\_work\_tree\_exists checks if the working tree associated with the repository exists, and otherwise dies. Often done before calling cd\_to\_toplevel, which is impossible to do if there is no working tree. require\_clean\_work\_tree <action> [<hint>] checks that the working tree and index associated with the repository have no uncommitted changes to tracked files. Otherwise it emits an error message of the form `Cannot <action>: <reason>. <hint>`, and dies. Example: ``` require_clean_work_tree rebase "Please commit or stash them." ``` get\_author\_ident\_from\_commit outputs code for use with eval to set the GIT\_AUTHOR\_NAME, GIT\_AUTHOR\_EMAIL and GIT\_AUTHOR\_DATE variables for a given commit. create\_virtual\_base modifies the first file so only lines in common with the second file remain. If there is insufficient common material, then the first file is left empty. The result is suitable as a virtual base input for a 3-way merge. git gitworkflows gitworkflows ============ Name ---- gitworkflows - An overview of recommended workflows with Git Synopsis -------- ``` git * ``` Description ----------- This document attempts to write down and motivate some of the workflow elements used for `git.git` itself. Many ideas apply in general, though the full workflow is rarely required for smaller projects with fewer people involved. We formulate a set of `rules` for quick reference, while the prose tries to motivate each of them. Do not always take them literally; you should value good reasons for your actions higher than manpages such as this one. Separate changes ---------------- As a general rule, you should try to split your changes into small logical steps, and commit each of them. They should be consistent, working independently of any later commits, pass the test suite, etc. This makes the review process much easier, and the history much more useful for later inspection and analysis, for example with [git-blame[1]](git-blame) and [git-bisect[1]](git-bisect). 
To achieve this, try to split your work into small steps from the very beginning. It is always easier to squash a few commits together than to split one big commit into several. Don’t be afraid of making too small or imperfect steps along the way. You can always go back later and edit the commits with `git rebase --interactive` before you publish them. You can use `git stash push --keep-index` to run the test suite independent of other uncommitted changes; see the EXAMPLES section of [git-stash[1]](git-stash). Managing branches ----------------- There are two main tools that can be used to include changes from one branch on another: [git-merge[1]](git-merge) and [git-cherry-pick[1]](git-cherry-pick). Merges have many advantages, so we try to solve as many problems as possible with merges alone. Cherry-picking is still occasionally useful; see "Merging upwards" below for an example. Most importantly, merging works at the branch level, while cherry-picking works at the commit level. This means that a merge can carry over the changes from 1, 10, or 1000 commits with equal ease, which in turn means the workflow scales much better to a large number of contributors (and contributions). Merges are also easier to understand because a merge commit is a "promise" that all changes from all its parents are now included. There is a tradeoff of course: merges require a more careful branch management. The following subsections discuss the important points. ### Graduation As a given feature goes from experimental to stable, it also "graduates" between the corresponding branches of the software. `git.git` uses the following `integration branches`: * `maint` tracks the commits that should go into the next "maintenance release", i.e., update of the last released stable version; * `master` tracks the commits that should go into the next release; * `next` is intended as a testing branch for topics being tested for stability for master. 
There is a fourth official branch that is used slightly differently: * `seen` (patches seen by the maintainer) is an integration branch for things that are not quite ready for inclusion yet (see "Integration Branches" below). Each of the four branches is usually a direct descendant of the one above it. Conceptually, the feature enters at an unstable branch (usually `next` or `seen`), and "graduates" to `master` for the next release once it is considered stable enough. ### Merging upwards The "downwards graduation" discussed above cannot be done by actually merging downwards, however, since that would merge `all` changes on the unstable branch into the stable one. Hence the following: Rule: Merge upwards Always commit your fixes to the oldest supported branch that requires them. Then (periodically) merge the integration branches upwards into each other. This gives a very controlled flow of fixes. If you notice that you have applied a fix to e.g. `master` that is also required in `maint`, you will need to cherry-pick it (using [git-cherry-pick[1]](git-cherry-pick)) downwards. This will happen a few times and is nothing to worry about unless you do it very frequently. ### Topic branches Any nontrivial feature will require several patches to implement, and may get extra bugfixes or improvements during its lifetime. Committing everything directly on the integration branches leads to many problems: Bad commits cannot be undone, so they must be reverted one by one, which creates confusing histories and further error potential when you forget to revert part of a group of changes. Working in parallel mixes up the changes, creating further confusion. Use of "topic branches" solves these problems. The name is pretty self explanatory, with a caveat that comes from the "merge upwards" rule above: Rule: Topic branches Make a side branch for every topic (feature, bugfix, …​). Fork it off at the oldest integration branch that you will eventually want to merge it into. 
Many things can then be done very naturally: * To get the feature/bugfix into an integration branch, simply merge it. If the topic has evolved further in the meantime, merge again. (Note that you do not necessarily have to merge it to the oldest integration branch first. For example, you can first merge a bugfix to `next`, give it some testing time, and merge to `maint` when you know it is stable.) * If you find you need new features from the branch `other` to continue working on your topic, merge `other` to `topic`. (However, do not do this "just habitually", see below.) * If you find you forked off the wrong branch and want to move it "back in time", use [git-rebase[1]](git-rebase). Note that the last point clashes with the other two: a topic that has been merged elsewhere should not be rebased. See the section on RECOVERING FROM UPSTREAM REBASE in [git-rebase[1]](git-rebase). We should point out that "habitually" (regularly for no real reason) merging an integration branch into your topics — and by extension, merging anything upstream into anything downstream on a regular basis — is frowned upon: Rule: Merge to downstream only at well-defined points Do not merge to downstream except with a good reason: upstream API changes affect your branch; your branch no longer merges to upstream cleanly; etc. Otherwise, the topic that was merged to suddenly contains more than a single (well-separated) change. The many resulting small merges will greatly clutter up history. Anyone who later investigates the history of a file will have to find out whether that merge affected the topic in development. An upstream might even inadvertently be merged into a "more stable" branch. And so on. ### Throw-away integration If you followed the last paragraph, you will now have many small topic branches, and occasionally wonder how they interact. Perhaps the result of merging them does not even work? 
But on the other hand, we want to avoid merging them anywhere "stable" because such merges cannot easily be undone. The solution, of course, is to make a merge that we can undo: merge into a throw-away branch. Rule: Throw-away integration branches To test the interaction of several topics, merge them into a throw-away branch. You must never base any work on such a branch! If you make it (very) clear that this branch is going to be deleted right after the testing, you can even publish this branch, for example to give the testers a chance to work with it, or other developers a chance to see if their in-progress work will be compatible. `git.git` has such an official throw-away integration branch called `seen`. ### Branch management for a release Assuming you are using the merge approach discussed above, when you are releasing your project you will need to do some additional branch management work. A feature release is created from the `master` branch, since `master` tracks the commits that should go into the next feature release. The `master` branch is supposed to be a superset of `maint`. If this condition does not hold, then `maint` contains some commits that are not included on `master`. The fixes represented by those commits will therefore not be included in your feature release. To verify that `master` is indeed a superset of `maint`, use git log: Recipe: Verify *master* is a superset of *maint* `git log master..maint` This command should not list any commits. Otherwise, check out `master` and merge `maint` into it. Now you can proceed with the creation of the feature release. Apply a tag to the tip of `master` indicating the release version: Recipe: Release tagging `git tag -s -m "Git X.Y.Z" vX.Y.Z master` You need to push the new tag to a public Git server (see "DISTRIBUTED WORKFLOWS" below). This makes the tag available to others tracking your project. 
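The tagging and pushing steps can be sketched end to end; here a local bare repository stands in for the public server, and an unsigned annotated tag of an illustrative `v1.0.0` takes the place of the signed `vX.Y.Z` in the recipe above:

```shell
cd "$(mktemp -d)"
git init -q --bare public.git                # stand-in for the public server
git init -q -b master work && cd work
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m 'release-worthy state'
git remote add origin ../public.git
git tag -a -m 'Git 1.0.0' v1.0.0 master      # the recipe uses -s to GPG-sign
git push -q origin master v1.0.0             # the tag is now public
```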
The push could also trigger a post-update hook to perform release-related items such as building release tarballs and preformatted documentation pages. Similarly, for a maintenance release, `maint` is tracking the commits to be released. Therefore, in the steps above simply tag and push `maint` rather than `master`. ### Maintenance branch management after a feature release After a feature release, you need to manage your maintenance branches. First, if you wish to continue to release maintenance fixes for the feature release made before the recent one, then you must create another branch to track commits for that previous release. To do this, the current maintenance branch is copied to another branch named with the previous release version number (e.g. maint-X.Y.(Z-1) where X.Y.Z is the current release). Recipe: Copy maint `git branch maint-X.Y.(Z-1) maint` The `maint` branch should now be fast-forwarded to the newly released code so that maintenance fixes can be tracked for the current release: Recipe: Update maint to new release * `git checkout maint` * `git merge --ff-only master` If the merge fails because it is not a fast-forward, then it is possible some fixes on `maint` were missed in the feature release. This will not happen if the content of the branches was verified as described in the previous section. ### Branch management for next and seen after a feature release After a feature release, the integration branch `next` may optionally be rewound and rebuilt from the tip of `master` using the surviving topics on `next`: Recipe: Rewind and rebuild next * `git switch -C next master` * `git merge ai/topic_in_next1` * `git merge ai/topic_in_next2` * …​ The advantage of doing this is that the history of `next` will be clean. For example, some topics merged into `next` may have initially looked promising, but were later found to be undesirable or premature. 
In such a case, the topic is reverted out of `next` but the fact remains in the history that it was once merged and reverted. By recreating `next`, you give another incarnation of such topics a clean slate to retry, and a feature release is a good point in history to do so. If you do this, then you should make a public announcement indicating that `next` was rewound and rebuilt. The same rewind and rebuild process may be followed for `seen`. A public announcement is not necessary since `seen` is a throw-away branch, as described above. Distributed workflows --------------------- After the last section, you should know how to manage topics. In general, you will not be the only person working on the project, so you will have to share your work. Roughly speaking, there are two important workflows: merge and patch. The important difference is that the merge workflow can propagate full history, including merges, while patches cannot. Both workflows can be used in parallel: in `git.git`, only subsystem maintainers use the merge workflow, while everyone else sends patches. Note that the maintainer(s) may impose restrictions, such as "Signed-off-by" requirements, that all commits/patches submitted for inclusion must adhere to. Consult your project’s documentation for more information. ### Merge workflow The merge workflow works by copying branches between upstream and downstream. Upstream can merge contributions into the official history; downstream base their work on the official history. There are three main tools that can be used for this: * [git-push[1]](git-push) copies your branches to a remote repository, usually to one that can be read by all involved parties; * [git-fetch[1]](git-fetch) that copies remote branches to your repository; and * [git-pull[1]](git-pull) that does fetch and merge in one go. Note the last point. Do `not` use `git pull` unless you actually want to merge the remote branch. 
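A minimal fetch-inspect-merge sequence, sketched with two local repositories (all paths and messages illustrative):

```shell
# Fetch and review before deciding to merge, instead of a habitual "git pull".
cd "$(mktemp -d)"
git init -q -b master upstream
git -C upstream -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m 'initial'
git clone -q upstream work
git -C upstream -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m 'new upstream work'
cd work
git fetch -q origin                    # only updates origin/*; merges nothing
git log --oneline ..origin/master      # review what actually arrived
git merge -q origin/master             # merge deliberately, once reviewed
```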
Getting changes out is easy: Recipe: Push/pull: Publishing branches/topics `git push <remote> <branch>` and tell everyone where they can fetch from. You will still have to tell people by other means, such as mail. (Git provides the [git-request-pull[1]](git-request-pull) to send preformatted pull requests to upstream maintainers to simplify this task.) If you just want to get the newest copies of the integration branches, staying up to date is easy too: Recipe: Push/pull: Staying up to date Use `git fetch <remote>` or `git remote update` to stay up to date. Then simply fork your topic branches from the stable remotes as explained earlier. If you are a maintainer and would like to merge other people’s topic branches to the integration branches, they will typically send a request to do so by mail. Such a request looks like ``` Please pull from <URL> <branch> ``` In that case, `git pull` can do the fetch and merge in one go, as follows. Recipe: Push/pull: Merging remote topics `git pull <URL> <branch>` Occasionally, the maintainer may get merge conflicts when they try to pull changes from downstream. In this case, they can ask downstream to do the merge and resolve the conflicts themselves (perhaps they will know better how to resolve them). It is one of the rare cases where downstream `should` merge from upstream. ### Patch workflow If you are a contributor that sends changes upstream in the form of emails, you should use topic branches as usual (see above). Then use [git-format-patch[1]](git-format-patch) to generate the corresponding emails (highly recommended over manually formatting them because it makes the maintainer’s life easier). Recipe: format-patch/am: Publishing branches/topics * `git format-patch -M upstream..topic` to turn them into preformatted patch files * `git send-email --to=<recipient> <patches>` See the [git-format-patch[1]](git-format-patch) and [git-send-email[1]](git-send-email) manpages for further usage notes. 
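A complete format-patch/am round trip, with the patch file standing in for the email (scratch repositories; all names illustrative):

```shell
cd "$(mktemp -d)"
git init -q -b master upstream && cd upstream
git config user.email maint@example.com && git config user.name Maint
git commit -q --allow-empty -m 'initial'
cd .. && git clone -q upstream contrib && cd contrib
git config user.email dev@example.com && git config user.name Dev
git switch -q -c topic
printf 'feature\n' > f.txt && git add f.txt && git commit -qm 'add feature'
git format-patch -o ../patches origin/master   # one mail-ready file per commit
cd ../upstream
git am -q ../patches/*.patch                   # the maintainer imports the series
```

The imported commit keeps the contributor as author while the maintainer becomes the committer.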
If the maintainer tells you that your patch no longer applies to the current upstream, you will have to rebase your topic (you cannot use a merge because you cannot format-patch merges): Recipe: format-patch/am: Keeping topics up to date `git pull --rebase <URL> <branch>` You can then fix the conflicts during the rebase. Presumably you have not published your topic other than by mail, so rebasing it is not a problem. If you receive such a patch series (as maintainer, or perhaps as a reader of the mailing list it was sent to), save the mails to files, create a new topic branch and use `git am` to import the commits: Recipe: format-patch/am: Importing patches `git am < patch` One feature worth pointing out is the three-way merge, which can help if you get conflicts: `git am -3` will use index information contained in patches to figure out the merge base. See [git-am[1]](git-am) for other options. See also -------- [gittutorial[7]](gittutorial), [git-push[1]](git-push), [git-pull[1]](git-pull), [git-merge[1]](git-merge), [git-rebase[1]](git-rebase), [git-format-patch[1]](git-format-patch), [git-send-email[1]](git-send-email), [git-am[1]](git-am) git git-verify-commit git-verify-commit ================= Name ---- git-verify-commit - Check the GPG signature of commits Synopsis -------- ``` git verify-commit [-v | --verbose] [--raw] <commit>…​ ``` Description ----------- Validates the GPG signature created by `git commit -S`. Options ------- --raw Print the raw gpg status output to standard error instead of the normal human-readable output. -v --verbose Print the contents of the commit object before validating it. <commit>…​ SHA-1 identifiers of Git commit objects. 
git git-replace git-replace =========== Name ---- git-replace - Create, list, delete refs to replace objects Synopsis -------- ``` git replace [-f] <object> <replacement> git replace [-f] --edit <object> git replace [-f] --graft <commit> [<parent>…​] git replace [-f] --convert-graft-file git replace -d <object>…​ git replace [--format=<format>] [-l [<pattern>]] ``` Description ----------- Adds a `replace` reference in `refs/replace/` namespace. The name of the `replace` reference is the SHA-1 of the object that is replaced. The content of the `replace` reference is the SHA-1 of the replacement object. The replaced object and the replacement object must be of the same type. This restriction can be bypassed using `-f`. Unless `-f` is given, the `replace` reference must not yet exist. There is no other restriction on the replaced and replacement objects. Merge commits can be replaced by non-merge commits and vice versa. Replacement references will be used by default by all Git commands except those doing reachability traversal (prune, pack transfer and fsck). It is possible to disable use of replacement references for any command using the `--no-replace-objects` option just after `git`. For example if commit `foo` has been replaced by commit `bar`: ``` $ git --no-replace-objects cat-file commit foo ``` shows information about commit `foo`, while: ``` $ git cat-file commit foo ``` shows information about commit `bar`. The `GIT_NO_REPLACE_OBJECTS` environment variable can be set to achieve the same effect as the `--no-replace-objects` option. Options ------- -f --force If an existing replace ref for the same object exists, it will be overwritten (instead of failing). -d --delete Delete existing replace refs for the given objects. --edit <object> Edit an object’s content interactively. The existing content for <object> is pretty-printed into a temporary file, an editor is launched on the file, and the result is parsed to create a new object of the same type as <object>. 
A replacement ref is then created to replace <object> with the newly created object. See [git-var[1]](git-var) for details about how the editor will be chosen.

--raw

When editing, provide the raw object contents rather than pretty-printed ones. Currently this only affects trees, which will be shown in their binary form. This is harder to work with, but can help when repairing a tree that is so corrupted it cannot be pretty-printed. Note that you may need to configure your editor to cleanly read and write binary data.

--graft <commit> [<parent>…​]

Create a graft commit. A new commit is created with the same content as <commit> except that its parents will be [<parent>…​] instead of <commit>'s parents. A replacement ref is then created to replace <commit> with the newly created commit. Use `--convert-graft-file` to convert a `$GIT_DIR/info/grafts` file and use replace refs instead.

--convert-graft-file

Creates graft commits for all entries in `$GIT_DIR/info/grafts` and deletes that file upon success. The purpose is to help users with transitioning off of the now-deprecated graft file.

-l <pattern>
--list <pattern>

List replace refs for objects that match the given pattern (or all if no pattern is given). Typing "git replace" without arguments also lists all replace refs.

--format=<format>

When listing, use the specified <format>, which can be one of `short`, `medium` and `long`. When omitted, the format defaults to `short`.

Formats
-------

The following formats are available:

* `short`: <replaced sha1>
* `medium`: <replaced sha1> → <replacement sha1>
* `long`: <replaced sha1> (<replaced type>) → <replacement sha1> (<replacement type>)

Creating replacement objects
----------------------------

[git-hash-object[1]](git-hash-object), [git-rebase[1]](git-rebase), and [git-filter-repo](https://github.com/newren/git-filter-repo), among other git commands, can be used to create replacement objects from existing objects.
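For instance, `--graft` creates a replacement commit directly; a scratch-repository sketch (contents illustrative):

```shell
# Make HEAD appear to be a root commit, then compare both views of history.
cd "$(mktemp -d)" && git init -q -b master
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m 'one'
git commit -q --allow-empty -m 'two'
git replace --graft HEAD                         # no parents given: HEAD becomes a root
git rev-list --count HEAD                        # replacement in effect: 1 commit
git --no-replace-objects rev-list --count HEAD   # the real history: 2 commits
git replace -l                                   # the ref under refs/replace/
```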
The `--edit` option can also be used with `git replace` to create a replacement object by editing an existing object. If you want to replace many blobs, trees or commits that are part of a string of commits, you may just want to create a replacement string of commits and then only replace the commit at the tip of the target string of commits with the commit at the tip of the replacement string of commits. Bugs ---- Comparing blobs or trees that have been replaced with those that replace them will not work properly. And using `git reset --hard` to go back to a replaced commit will move the branch to the replacement commit instead of the replaced commit. There may be other problems when using `git rev-list` related to pending objects. See also -------- [git-hash-object[1]](git-hash-object) [git-rebase[1]](git-rebase) [git-tag[1]](git-tag) [git-branch[1]](git-branch) [git-commit[1]](git-commit) [git-var[1]](git-var) [git[1]](git) [git-filter-repo](https://github.com/newren/git-filter-repo)
git gitprotocol-v2

gitprotocol-v2
==============

Name
----

gitprotocol-v2 - Git Wire Protocol, Version 2

Synopsis
--------

```
<over-the-wire-protocol>
```

Description
-----------

This document presents a specification for a version 2 of Git’s wire protocol. Protocol v2 will improve upon v1 in the following ways:

* Instead of multiple service names, multiple commands will be supported by a single service
* Easily extendable as capabilities are moved into their own section of the protocol, no longer being hidden behind a NUL byte and limited by the size of a pkt-line
* Separate out other information hidden behind NUL bytes (e.g. agent string as a capability and symrefs can be requested using `ls-refs`)
* Reference advertisement will be omitted unless explicitly requested
* ls-refs command to explicitly request some refs
* Designed with http and stateless-rpc in mind. With clear flush semantics the http remote helper can simply act as a proxy

In protocol v2 communication is command oriented. When first contacting a server, a list of capabilities will be advertised. Some of these capabilities will be commands which a client can request be executed. Once a command has completed, a client can reuse the connection and request that other commands be executed.

Packet-line framing
-------------------

All communication is done using packet-line framing, just as in v1. See [gitprotocol-pack[5]](gitprotocol-pack) and [gitprotocol-common[5]](gitprotocol-common) for more information.
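To make the framing concrete, here is a tiny pkt-line encoder in shell (an illustration, not part of the specification). The four hex digits give the total length, counting the four length bytes themselves and, by convention, a trailing LF in the payload:

```shell
# Encode one pkt-line: 4-hex-digit self-inclusive length, payload, LF.
pkt_line() { printf '%04x%s\n' $((4 + ${#1} + 1)) "$1"; }

pkt_line 'version 2'          # prints: 000eversion 2
pkt_line 'command=ls-refs'    # prints: 0014command=ls-refs
printf '0000'                 # a flush-pkt is a bare length with no payload
```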
In protocol v2 these special packets will have the following semantics: * `0000` Flush Packet (flush-pkt) - indicates the end of a message * `0001` Delimiter Packet (delim-pkt) - separates sections of a message * `0002` Response End Packet (response-end-pkt) - indicates the end of a response for stateless connections Initial client request ---------------------- In general a client can request to speak protocol v2 by sending `version=2` through the respective side-channel for the transport being used which inevitably sets `GIT_PROTOCOL`. More information can be found in [gitprotocol-pack[5]](gitprotocol-pack) and [gitprotocol-http[5]](gitprotocol-http), as well as the `GIT_PROTOCOL` definition in `git.txt`. In all cases the response from the server is the capability advertisement. ### Git Transport When using the git:// transport, you can request to use protocol v2 by sending "version=2" as an extra parameter: ``` 003egit-upload-pack /project.git\0host=myserver.com\0\0version=2\0 ``` ### SSH and File Transport When using either the ssh:// or file:// transport, the GIT\_PROTOCOL environment variable must be set explicitly to include "version=2". The server may need to be configured to allow this environment variable to pass. ### HTTP Transport When using the http:// or https:// transport a client makes a "smart" info/refs request as described in [gitprotocol-http[5]](gitprotocol-http) and requests that v2 be used by supplying "version=2" in the `Git-Protocol` header. ``` C: GET $GIT_URL/info/refs?service=git-upload-pack HTTP/1.0 C: Git-Protocol: version=2 ``` A v2 server would reply: ``` S: 200 OK S: <Some headers> S: ... S: S: 000eversion 2\n S: <capability-advertisement> ``` Subsequent requests are then made directly to the service `$GIT_URL/git-upload-pack`. (This works the same for git-receive-pack). Uses the `--http-backend-info-refs` option to [git-upload-pack[1]](git-upload-pack). 
The server may need to be configured to pass this header’s contents via the `GIT_PROTOCOL` variable. See the discussion in `git-http-backend.txt`. Capability advertisement ------------------------ A server which decides to communicate (based on a request from a client) using protocol version 2, notifies the client by sending a version string in its initial response followed by an advertisement of its capabilities. Each capability is a key with an optional value. Clients must ignore all unknown keys. Semantics of unknown values are left to the definition of each key. Some capabilities will describe commands which can be requested to be executed by the client. ``` capability-advertisement = protocol-version capability-list flush-pkt ``` ``` protocol-version = PKT-LINE("version 2" LF) capability-list = *capability capability = PKT-LINE(key[=value] LF) ``` ``` key = 1*(ALPHA | DIGIT | "-_") value = 1*(ALPHA | DIGIT | " -_.,?\/{}[]()<>!@#$%^&*+=:;") ``` Command request --------------- After receiving the capability advertisement, a client can then issue a request to select the command it wants with any particular capabilities or arguments. There is then an optional section where the client can provide any command specific parameters or queries. Only a single command can be requested at a time. ``` request = empty-request | command-request empty-request = flush-pkt command-request = command capability-list delim-pkt command-args flush-pkt command = PKT-LINE("command=" key LF) command-args = *command-specific-arg ``` ``` command-specific-args are packet line framed arguments defined by each individual command. ``` The server will then check to ensure that the client’s request is comprised of a valid command as well as valid capabilities which were advertised. If the request is valid the server will then execute the command. A server MUST wait till it has received the client’s entire request before issuing a response. 
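The advertisement and a command request can be observed locally by tracing packets while forcing protocol v2 (a sketch; the scratch repository and file names are illustrative):

```shell
cd "$(mktemp -d)" && git init -q -b master
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m 'initial'
# Trace both sides of a local ls-remote; the trace lands on stderr.
GIT_TRACE_PACKET=1 git -c protocol.version=2 ls-remote . >/dev/null 2>trace.txt
grep 'version 2' trace.txt          # the server's protocol-version line
grep 'command=ls-refs' trace.txt    # the client's command request
```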
The format of the response is determined by the command being executed, but in all cases a flush-pkt indicates the end of the response. When a command has finished, and the client has received the entire response from the server, a client can either request that another command be executed or can terminate the connection. A client may optionally send an empty request consisting of just a flush-pkt to indicate that no more requests will be made. Capabilities ------------ There are two different types of capabilities: normal capabilities, which can be used to convey information or alter the behavior of a request, and commands, which are the core actions that a client wants to perform (fetch, push, etc). Protocol version 2 is stateless by default. This means that all commands must only last a single round and be stateless from the perspective of the server side, unless the client has requested a capability indicating that state should be maintained by the server. Clients MUST NOT require state management on the server side in order to function correctly. This permits simple round-robin load-balancing on the server side, without needing to worry about state management. ### agent The server can advertise the `agent` capability with a value `X` (in the form `agent=X`) to notify the client that the server is running version `X`. The client may optionally send its own agent string by including the `agent` capability with a value `Y` (in the form `agent=Y`) in its request to the server (but it MUST NOT do so if the server did not advertise the agent capability). The `X` and `Y` strings may contain any printable ASCII characters except space (i.e., the byte range 32 < x < 127), and are typically of the form "package/version" (e.g., "git/1.8.3.1"). The agent strings are purely informative for statistics and debugging purposes, and MUST NOT be used to programmatically assume the presence or absence of particular features. 
### ls-refs `ls-refs` is the command used to request a reference advertisement in v2. Unlike the current reference advertisement, ls-refs takes in arguments which can be used to limit the refs sent from the server. Additional features not supported in the base command will be advertised as the value of the command in the capability advertisement in the form of a space separated list of features: "<command>=<feature 1> <feature 2>" ls-refs takes in the following arguments: ``` symrefs In addition to the object pointed by it, show the underlying ref pointed by it when showing a symbolic ref. peel Show peeled tags. ref-prefix <prefix> When specified, only references having a prefix matching one of the provided prefixes are displayed. Multiple instances may be given, in which case references matching any prefix will be shown. Note that this is purely for optimization; a server MAY show refs not matching the prefix if it chooses, and clients should filter the result themselves. ``` If the `unborn` feature is advertised the following argument can be included in the client’s request. ``` unborn The server will send information about HEAD even if it is a symref pointing to an unborn branch in the form "unborn HEAD symref-target:<target>". ``` The output of ls-refs is as follows: ``` output = *ref flush-pkt obj-id-or-unborn = (obj-id | "unborn") ref = PKT-LINE(obj-id-or-unborn SP refname *(SP ref-attribute) LF) ref-attribute = (symref | peeled) symref = "symref-target:" symref-target peeled = "peeled:" obj-id ``` ### fetch `fetch` is the command used to fetch a packfile in v2. It can be looked at as a modified version of the v1 fetch where the ref-advertisement is stripped out (since the `ls-refs` command fills that role) and the message format is tweaked to eliminate redundancies and permit easy addition of future extensions. 
Additional features not supported in the base command will be advertised as the value of the command in the capability advertisement in the form of a space separated list of features: "<command>=<feature 1> <feature 2>"

A `fetch` request can take the following arguments:

```
want <oid>
    Indicates to the server an object which the client wants to retrieve. Wants can be anything and are not limited to advertised objects.
```

```
have <oid>
    Indicates to the server an object which the client has locally. This allows the server to make a packfile which only contains the objects that the client needs. Multiple 'have' lines can be supplied.
```

```
done
    Indicates to the server that negotiation should terminate (or not even begin if performing a clone) and that the server should use the information supplied in the request to construct the packfile.
```

```
thin-pack
    Request that a thin pack be sent, which is a pack with deltas which reference base objects not contained within the pack (but are known to exist at the receiving end). This can reduce the network traffic significantly, but it requires the receiving end to know how to "thicken" these packs by adding the missing bases to the pack.
```

```
no-progress
    Request that progress information that would normally be sent on side-band channel 2, during the packfile transfer, should not be sent. However, the side-band channel 3 is still used for error responses.
```

```
include-tag
    Request that annotated tags should be sent if the objects they point to are being sent.
```

```
ofs-delta
    Indicate that the client understands PACKv2 with delta referring to its base by position in pack rather than by an oid. That is, they can read OBJ_OFS_DELTA (aka type 6) in a packfile.
```

If the `shallow` feature is advertised the following arguments can be included in the client's request as well as the potential addition of the `shallow-info` section in the server’s response as explained below.
``` shallow <oid> A client must notify the server of all commits for which it only has shallow copies (meaning that it doesn't have the parents of a commit) by supplying a 'shallow <oid>' line for each such object so that the server is aware of the limitations of the client's history. This is so that the server is aware that the client may not have all objects reachable from such commits. ``` ``` deepen <depth> Requests that the fetch/clone should be shallow having a commit depth of <depth> relative to the remote side. ``` ``` deepen-relative Requests that the semantics of the "deepen" command be changed to indicate that the depth requested is relative to the client's current shallow boundary, instead of relative to the requested commits. ``` ``` deepen-since <timestamp> Requests that the shallow clone/fetch should be cut at a specific time, instead of depth. Internally it's equivalent to doing "git rev-list --max-age=<timestamp>". Cannot be used with "deepen". ``` ``` deepen-not <rev> Requests that the shallow clone/fetch should be cut at a specific revision specified by '<rev>', instead of a depth. Internally it's equivalent of doing "git rev-list --not <rev>". Cannot be used with "deepen", but can be used with "deepen-since". ``` If the `filter` feature is advertised, the following argument can be included in the client’s request: ``` filter <filter-spec> Request that various objects from the packfile be omitted using one of several filtering techniques. These are intended for use with partial clone and partial fetch operations. See `rev-list` for possible "filter-spec" values. When communicating with other processes, senders SHOULD translate scaled integers (e.g. "1k") into a fully-expanded form (e.g. "1024") to aid interoperability with older receivers that may not understand newly-invented scaling suffixes. However, receivers SHOULD accept the following suffixes: 'k', 'm', and 'g' for 1024, 1048576, and 1073741824, respectively. 
``` If the `ref-in-want` feature is advertised, the following argument can be included in the client’s request as well as the potential addition of the `wanted-refs` section in the server’s response as explained below. ``` want-ref <ref> Indicates to the server that the client wants to retrieve a particular ref, where <ref> is the full name of a ref on the server. ``` If the `sideband-all` feature is advertised, the following argument can be included in the client’s request: ``` sideband-all Instruct the server to send the whole response multiplexed, not just the packfile section. All non-flush and non-delim PKT-LINE in the response (not only in the packfile section) will then start with a byte indicating its sideband (1, 2, or 3), and the server may send "0005\2" (a PKT-LINE of sideband 2 with no payload) as a keepalive packet. ``` If the `packfile-uris` feature is advertised, the following argument can be included in the client’s request as well as the potential addition of the `packfile-uris` section in the server’s response as explained below. ``` packfile-uris <comma-separated list of protocols> Indicates to the server that the client is willing to receive URIs of any of the given protocols in place of objects in the sent packfile. Before performing the connectivity check, the client should download from all given URIs. Currently, the protocols supported are "http" and "https". ``` If the `wait-for-done` feature is advertised, the following argument can be included in the client’s request. ``` wait-for-done Indicates to the server that it should never send "ready", but should wait for the client to say "done" before sending the packfile. ``` The response of `fetch` is broken into a number of sections separated by delimiter packets (0001), with each section beginning with its section header. Most sections are sent only when the packfile is sent. 
```
output = acknowledgments flush-pkt |
         [acknowledgments delim-pkt] [shallow-info delim-pkt]
         [wanted-refs delim-pkt] [packfile-uris delim-pkt]
         packfile flush-pkt
```

```
acknowledgments = PKT-LINE("acknowledgments" LF)
                  (nak | *ack)
                  (ready)
ready = PKT-LINE("ready" LF)
nak = PKT-LINE("NAK" LF)
ack = PKT-LINE("ACK" SP obj-id LF)
```

```
shallow-info = PKT-LINE("shallow-info" LF)
               *PKT-LINE((shallow | unshallow) LF)
shallow = "shallow" SP obj-id
unshallow = "unshallow" SP obj-id
```

```
wanted-refs = PKT-LINE("wanted-refs" LF)
              *PKT-LINE(wanted-ref LF)
wanted-ref = obj-id SP refname
```

```
packfile-uris = PKT-LINE("packfile-uris" LF)
                *packfile-uri
packfile-uri = PKT-LINE(40*(HEXDIGIT) SP *%x20-ff LF)
```

```
packfile = PKT-LINE("packfile" LF)
           *PKT-LINE(%x01-03 *%x00-ff)
```

```
acknowledgments section
    * If the client determines that it is finished with negotiations by
      sending a "done" line (thus requiring the server to send a
      packfile), the acknowledgments section MUST be omitted from the
      server's response.
```

* Always begins with the section header "acknowledgments"
* The server will respond with "NAK" if none of the object ids sent as have lines were common.
* The server will respond with "ACK obj-id" for all of the object ids sent as have lines which are common.
* A response cannot have both "ACK" lines as well as a "NAK" line.
* The server will respond with a "ready" line indicating that the server has found an acceptable common base and is ready to make and send a packfile (which will be found in the packfile section of the same response)
* If the server has found a suitable cut point and has decided to send a "ready" line, then the server can decide to (as an optimization) omit any "ACK" lines it would have sent during its response. This is because the server will have already determined the objects it plans to send to the client and no further negotiation is needed.
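The acknowledgments rules above can be restated as a small validating parser. This Python sketch is illustrative only (the function name is made up); it takes the decoded payloads of the pkt-lines that follow the "acknowledgments" section header:

```python
def parse_acknowledgments(lines):
    """Classify the pkt-line payloads (LF already stripped) that follow
    the "acknowledgments" section header."""
    acks, nak, ready = [], False, False
    for line in lines:
        if line == "NAK":
            nak = True
        elif line == "ready":
            ready = True
        elif line.startswith("ACK "):
            acks.append(line[4:])          # a common obj-id
        else:
            raise ValueError("unexpected line: %r" % line)
    # A response cannot have both "ACK" lines and a "NAK" line.
    if nak and acks:
        raise ValueError('response mixes "ACK" and "NAK"')
    return acks, nak, ready
```

For example, `["NAK"]` parses to no common objects, while `["ACK <oid>", "ready"]` parses to one common object with the server ready to send a packfile.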
```
shallow-info section
    * If the client has requested a shallow fetch/clone, a shallow client
      requests a fetch, or the server is shallow, then the server's
      response may include a shallow-info section. The shallow-info
      section will be included if (due to one of the above conditions)
      the server needs to inform the client of any shallow boundaries or
      adjustments to the client's already existing shallow boundaries.
```

* Always begins with the section header "shallow-info"
* If a positive depth is requested, the server will compute the set of commits which are no deeper than the desired depth.
* The server sends a "shallow obj-id" line for each commit whose parents will not be sent in the following packfile.
* The server sends an "unshallow obj-id" line for each commit which the client has indicated is shallow, but is no longer shallow as a result of the fetch (due to its parents being sent in the following packfile).
* The server MUST NOT send any "unshallow" lines for anything which the client has not indicated was shallow as a part of its request.

```
wanted-refs section
    * This section is only included if the client has requested a ref
      using a 'want-ref' line and if a packfile section is also included
      in the response.
```

* Always begins with the section header "wanted-refs".
* The server will send a ref listing ("<oid> <refname>") for each reference requested using `want-ref` lines.
* The server MUST NOT send any refs which were not requested using `want-ref` lines.

```
packfile-uris section
    * This section is only included if the client sent 'packfile-uris'
      and the server has at least one such URI to send.
```

* Always begins with the section header "packfile-uris".
* For each URI the server sends, it sends a hash of the pack’s contents (as output by git index-pack) followed by the URI.
* The hashes are 40 hex characters long. When Git upgrades to a new hash algorithm, this might need to be updated.
(It should match whatever index-pack outputs after "pack\t" or "keep\t".)

```
packfile section
    * This section is only included if the client has sent 'want' lines
      in its request and either requested that no more negotiation be
      done by sending 'done' or if the server has decided it has found a
      sufficient cut point to produce a packfile.
```

* Always begins with the section header "packfile"
* The transmission of the packfile begins immediately after the section header
* The data transfer of the packfile is always multiplexed, using the same semantics of the `side-band-64k` capability from protocol version 1. This means that each packet, during the packfile data stream, is made up of a leading 4-byte pkt-line length (typical of the pkt-line format), followed by a 1-byte stream code, followed by the actual data.

```
The stream code can be one of:
    1 - pack data
    2 - progress messages
    3 - fatal error message just before stream aborts
```

### server-option

If advertised, indicates that any number of server specific options can be included in a request. This is done by sending each option as a "server-option=<option>" capability line in the capability-list section of a request. The provided options must not contain a NUL or LF character.

### object-format

The server can advertise the `object-format` capability with a value `X` (in the form `object-format=X`) to notify the client that the server is able to deal with objects using hash algorithm X. If not specified, the server is assumed to only handle SHA-1. If the client would like to use a hash algorithm other than SHA-1, it should specify its object-format string.

### session-id=<session id>

The server may advertise a session ID that can be used to identify this process across multiple requests. The client may advertise its own session ID back to the server as well. Session IDs should be unique to a given process. They must fit within a packet-line, and must not contain non-printable or whitespace characters.
The current implementation uses trace2 session IDs (see <api-trace2> for details), but this may change and users of the session ID should not rely on this fact.

### object-info

`object-info` is the command to retrieve information about one or more objects. Its main purpose is to allow a client to make decisions based on this information without having to fully fetch objects. Object size is the only information that is currently supported.

An `object-info` request takes the following arguments:

```
size
    Requests size information to be returned for each listed object id.
```

```
oid <oid>
    Indicates to the server an object which the client wants to obtain
    information for.
```

The response of `object-info` is a list of the requested object ids and associated requested information, each separated by a single space.

```
output = info flush-pkt
```

```
info = PKT-LINE(attrs LF)
       *PKT-LINE(obj-info LF)
```

```
attrs = attr | attrs SP attr
```

```
attr = "size"
```

```
obj-info = obj-id SP obj-size
```
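As a hypothetical illustration of this grammar, the following Python sketch frames an `object-info` request and parses a decoded response. The helper names are invented, and the request framing is simplified (real requests also carry capability lines before the delim-pkt):

```python
def pkt_line(text):
    """Frame one textual pkt-line: 4 hex digits of total length (which
    include the length digits themselves), then the payload plus LF."""
    data = text.encode() + b"\n"
    return b"%04x" % (len(data) + 4) + data

def build_object_info_request(oids):
    """Assemble a simplified object-info request asking for sizes."""
    pkts = [pkt_line("command=object-info"),
            b"0001",                       # delim-pkt before the arguments
            pkt_line("size")]
    pkts += [pkt_line("oid " + oid) for oid in oids]
    return b"".join(pkts) + b"0000"        # flush-pkt ends the request

def parse_object_info_response(lines):
    """Turn decoded response lines ("size" header, then "<obj-id>
    <obj-size>" pairs) into a dict mapping obj-id to integer size."""
    if lines[0] != "size":
        raise ValueError("unexpected attrs line: %r" % lines[0])
    return {oid: int(size)
            for oid, size in (line.split(" ") for line in lines[1:])}
```

Each response line after the attrs header is an obj-id and its size separated by a single space, per the `obj-info` rule above.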
git-unpack-objects
==================

Name
----

git-unpack-objects - Unpack objects from a packed archive

Synopsis
--------

```
git unpack-objects [-n] [-q] [-r] [--strict]
```

Description
-----------

Read a packed archive (.pack) from the standard input, expanding the objects contained within and writing them into the repository in "loose" (one object per file) format.

Objects that already exist in the repository will **not** be unpacked from the packfile. Therefore, nothing will be unpacked if you use this command on a packfile that exists within the target repository.

See [git-repack[1]](git-repack) for options to generate new packs and replace existing ones.

Options
-------

-n
Dry run. Check the pack file without actually unpacking the objects.

-q
The command usually shows percentage progress. This flag suppresses it.

-r
When unpacking a corrupt packfile, the command dies at the first corruption. This flag tells it to keep going and make the best effort to recover as many objects as possible.

--strict
Don’t write objects with broken content or links.

--max-input-size=<size>
Die if the pack is larger than <size>.

api-simple-ipc
==============

The Simple-IPC API is a collection of `ipc_` prefixed library routines and a basic communication protocol that allow an IPC-client process to send an application-specific IPC-request message to an IPC-server process and receive an application-specific IPC-response message.

Communication occurs over a named pipe on Windows and a Unix domain socket on other platforms. IPC-clients and IPC-servers rendezvous at a previously agreed-to application-specific pathname (which is outside the scope of this design) that is local to the computer system.

The IPC-server routines within the server application process create a thread pool to listen for connections and receive request messages from multiple concurrent IPC-clients.
When received, these messages are dispatched up to the server application callbacks for handling. IPC-server routines then incrementally relay responses back to the IPC-client.

The IPC-client routines within a client application process connect to the IPC-server and send a request message and wait for a response. When received, the response is returned to the caller.

For example, the `fsmonitor--daemon` feature will be built as a server application on top of the IPC-server library routines. It will have threads watching for file system events and a thread pool waiting for client connections. Clients, such as `git status`, will request a list of file system events since a point in time and the server will respond with a list of changed files and directories. The formats of the request and response are application-specific; the IPC-client and IPC-server routines treat them as opaque byte streams.

Comparison with sub-process model
---------------------------------

The Simple-IPC mechanism differs from the existing `sub-process.c` model (Documentation/technical/long-running-process-protocol.txt) used by applications like Git-LFS. In the LFS-style sub-process model the helper is started by the foreground process, communication happens via a pair of file descriptors bound to the stdin/stdout of the sub-process, the sub-process only serves the current foreground process, and the sub-process exits when the foreground process terminates.

In the Simple-IPC model the server is a very long-running service. It can service many clients at the same time and has a private socket or named pipe connection to each active client. It might be started (on-demand) by the current client process or it might have been started by a previous client or by the OS at boot time. The server process is not associated with a terminal and it persists after clients terminate.
Clients do not have access to the stdin/stdout of the server process and therefore must communicate over sockets or named pipes.

Server startup and shutdown
---------------------------

How an application server based upon IPC-server is started is also outside the scope of the Simple-IPC design and is a property of the application using it. For example, the server might be started or restarted during routine maintenance operations, or it might be started as a system service during the system boot-up sequence, or it might be started on-demand by a foreground Git command when needed.

Similarly, server shutdown is a property of the application using the simple-ipc routines. For example, the server might decide to shut down when idle or only upon explicit request.

Simple-ipc protocol
-------------------

The Simple-IPC protocol consists of a single request message from the client and an optional response message from the server. Both the client and server messages are unlimited in length and are terminated with a flush packet.

The pkt-line routines ([gitprotocol-common[5]](gitprotocol-common)) are used to simplify buffer management during message generation, transmission, and reception. A flush packet is used to mark the end of the message. This allows the sender to incrementally generate and transmit the message. It allows the receiver to incrementally receive the message in chunks and to know when they have received the entire message.

The actual byte formats of the client request and server response messages are application specific. The IPC layer transmits and receives them as opaque byte buffers without any concern for the content within. It is the job of the calling application layer to understand the contents of the request and response messages.

Summary
-------

Conceptually, the Simple-IPC protocol is similar to an HTTP REST request. Clients connect, make an application-specific and stateless request, receive an application-specific response, and disconnect.
It is a one-round-trip facility for querying the server. The Simple-IPC routines hide the socket, named pipe, and thread pool details and allow the application layer to focus on the application at hand.

gitformat-chunk
===============

Name
----

gitformat-chunk - Chunk-based file formats

Synopsis
--------

Used by [gitformat-commit-graph[5]](gitformat-commit-graph) and the "MIDX" format (see the pack format documentation in [gitformat-pack[5]](gitformat-pack)).

Description
-----------

Some file formats in Git use a common concept of "chunks" to describe sections of the file. This allows structured access to a large file by scanning a small "table of contents" for the remaining data. This common format is used by the `commit-graph` and `multi-pack-index` files. See the `multi-pack-index` format in [gitformat-pack[5]](gitformat-pack) and the `commit-graph` format in [gitformat-commit-graph[5]](gitformat-commit-graph) for how they use the chunks to describe structured data.

A chunk-based file format begins with some header information custom to that format. That header should include enough information to identify the file type, format version, and number of chunks in the file. From this information, that file can determine the start of the chunk-based region.

The chunk-based region starts with a table of contents describing where each chunk starts and ends. This consists of (C+1) rows of 12 bytes each, where C is the number of chunks. Consider the following table:

```
| Chunk ID (4 bytes) | Chunk Offset (8 bytes) |
|--------------------|------------------------|
| ID[0]              | OFFSET[0]              |
| ...                | ...                    |
| ID[C]              | OFFSET[C]              |
| 0x0000             | OFFSET[C+1]            |
```

Each row consists of a 4-byte chunk identifier (ID) and an 8-byte offset. Each integer is stored in network-byte order. The chunk identifier `ID[i]` is a label for the data stored within this file from `OFFSET[i]` (inclusive) to `OFFSET[i+1]` (exclusive).
Thus, the size of the `i`th chunk is equal to the difference between `OFFSET[i+1]` and `OFFSET[i]`. This requires that the chunk data appears contiguously in the same order as the table of contents.

The final entry in the table of contents must be four zero bytes. This confirms that the table of contents is ending and provides the offset for the end of the chunk-based data.

Note: The chunk-based format expects that the file contains at least a trailing hash after `OFFSET[C+1]`.

Functions for working with chunk-based file formats are declared in `chunk-format.h`. Using these methods provides extra checks that assist developers when creating new file formats.

Writing chunk-based file formats
--------------------------------

To write a chunk-based file format, create a `struct chunkfile` by calling `init_chunkfile()` and pass a `struct hashfile` pointer. The caller is responsible for opening the `hashfile` and writing header information so the file format is identifiable before the chunk-based format begins.

Then, call `add_chunk()` for each chunk that is intended to be written. This populates the `chunkfile` with information about the order and size of each chunk to write. Provide a `chunk_write_fn` function pointer to perform the write of the chunk data upon request.

Call `write_chunkfile()` to write the table of contents to the `hashfile` followed by each of the chunks. This will verify that each chunk wrote the expected amount of data so the table of contents is correct.

Finally, call `free_chunkfile()` to clear the `struct chunkfile` data. The caller is responsible for finalizing the `hashfile` by writing the trailing hash and closing the file.

Reading chunk-based file formats
--------------------------------

To read a chunk-based file format, the file must be opened as a memory-mapped region. The chunk-format API expects that the entire file is mapped as a contiguous memory region.

Initialize a `struct chunkfile` pointer with `init_chunkfile(NULL)`.
After reading the header information from the beginning of the file, including the chunk count, call `read_table_of_contents()` to populate the `struct chunkfile` with the list of chunks, their offsets, and their sizes.

Extract the data information for each chunk using `pair_chunk()` or `read_chunk()`:

* `pair_chunk()` assigns a given pointer with the location inside the memory-mapped file corresponding to that chunk’s offset. If the chunk does not exist, then the pointer is not modified.
* `read_chunk()` takes a `chunk_read_fn` function pointer and calls it with the appropriate initial pointer and size information. The function is not called if the chunk does not exist. Use this method to read chunks if you need to perform immediate parsing or if you need to execute logic based on the size of the chunk.

After calling these methods, call `free_chunkfile()` to clear the `struct chunkfile` data. This will not close the memory-mapped region. Callers are expected to own that data for as long as the pointers into the region are needed.

Examples
--------

These file formats use the chunk-format API, and can be used as examples for future formats:

* **commit-graph:** see `write_commit_graph_file()` and `parse_commit_graph()` in `commit-graph.c` for how the chunk-format API is used to write and parse the commit-graph file format documented in the commit-graph file format in [gitformat-commit-graph[5]](gitformat-commit-graph).
* **multi-pack-index:** see `write_midx_internal()` and `load_multi_pack_index()` in `midx.c` for how the chunk-format API is used to write and parse the multi-pack-index file format documented in the multi-pack-index file format section of [gitformat-pack[5]](gitformat-pack).

git-write-tree
==============

Name
----

git-write-tree - Create a tree object from the current index

Synopsis
--------

```
git write-tree [--missing-ok] [--prefix=<prefix>/]
```

Description
-----------

Creates a tree object using the current index.
The name of the new tree object is printed to standard output. The index must be in a fully merged state.

Conceptually, `git write-tree` sync()s the current index contents into a set of tree files. In order to have that match what is actually in your directory right now, you need to have done a `git update-index` phase before you did the `git write-tree`.

Options
-------

--missing-ok
Normally `git write-tree` ensures that the objects referenced by the directory exist in the object database. This option disables this check.

--prefix=<prefix>/
Writes a tree object that represents a subdirectory `<prefix>`. This can be used to write the tree object for a subproject that is in the named subdirectory.

gitnamespaces
=============

Name
----

gitnamespaces - Git namespaces

Synopsis
--------

```
GIT_NAMESPACE=<namespace> git upload-pack
GIT_NAMESPACE=<namespace> git receive-pack
```

Description
-----------

Git supports dividing the refs of a single repository into multiple namespaces, each of which has its own branches, tags, and HEAD. Git can expose each namespace as an independent repository to pull from and push to, while sharing the object store, and exposing all the refs to operations such as [git-gc[1]](git-gc).

Storing multiple repositories as namespaces of a single repository avoids storing duplicate copies of the same objects, such as when storing multiple branches of the same source. The alternates mechanism provides similar support for avoiding duplicates, but alternates do not prevent duplication between new objects added to the repositories without ongoing maintenance, while namespaces do.

To specify a namespace, set the `GIT_NAMESPACE` environment variable to the namespace. For each ref namespace, Git stores the corresponding refs in a directory under `refs/namespaces/`. For example, `GIT_NAMESPACE=foo` will store refs under `refs/namespaces/foo/`. You can also specify namespaces via the `--namespace` option to [git[1]](git).
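The mapping from a namespace to its on-disk ref prefix can be illustrated with a small Python helper (hypothetical, not part of Git):

```python
def namespace_ref_prefix(namespace):
    """Expand GIT_NAMESPACE into the directory Git stores its refs under:
    each /-separated component nests another refs/namespaces/ level."""
    return "".join("refs/namespaces/%s/" % part
                   for part in namespace.split("/") if part)
```

So `foo` maps to `refs/namespaces/foo/`, and `foo/bar` maps to `refs/namespaces/foo/refs/namespaces/bar/`.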
Note that namespaces which include a `/` will expand to a hierarchy of namespaces; for example, `GIT_NAMESPACE=foo/bar` will store refs under `refs/namespaces/foo/refs/namespaces/bar/`. This makes paths in `GIT_NAMESPACE` behave hierarchically, so that cloning with `GIT_NAMESPACE=foo/bar` produces the same result as cloning with `GIT_NAMESPACE=foo` and cloning from that repo with `GIT_NAMESPACE=bar`. It also avoids ambiguity with strange namespace paths such as `foo/refs/heads/`, which could otherwise generate directory/file conflicts within the `refs` directory.

[git-upload-pack[1]](git-upload-pack) and [git-receive-pack[1]](git-receive-pack) rewrite the names of refs as specified by `GIT_NAMESPACE`. git-upload-pack and git-receive-pack will ignore all references outside the specified namespace.

The smart HTTP server, [git-http-backend[1]](git-http-backend), will pass `GIT_NAMESPACE` through to the backend programs; see [git-http-backend[1]](git-http-backend) for sample configuration to expose repository namespaces as repositories.

For a simple local test, you can use [git-remote-ext[1]](git-remote-ext):

```
git clone ext::'git --namespace=foo %s /tmp/prefixed.git'
```

Security
--------

The fetch and push protocols are not designed to prevent one side from stealing data from the other repository that was not intended to be shared. If you have private data that you need to protect from a malicious peer, your best option is to store it in another repository. This applies to both clients and servers. In particular, namespaces on a server are not effective for read access control; you should only grant read access to a namespace to clients that you would trust with read access to the entire repository.

The known attack vectors are as follows:

1. The victim sends "have" lines advertising the IDs of objects it has that are not explicitly intended to be shared but can be used to optimize the transfer if the peer also has them.
The attacker chooses an object ID X to steal and sends a ref to X, but isn’t required to send the content of X because the victim already has it. Now the victim believes that the attacker has X, and it sends the content of X back to the attacker later. (This attack is most straightforward for a client to perform on a server, by creating a ref to X in the namespace the client has access to and then fetching it. The most likely way for a server to perform it on a client is to "merge" X into a public branch and hope that the user does additional work on this branch and pushes it back to the server without noticing the merge.)

2. As in #1, the attacker chooses an object ID X to steal. The victim sends an object Y that the attacker already has, and the attacker falsely claims to have X and not Y, so the victim sends Y as a delta against X. The delta reveals regions of X that are similar to Y to the attacker.

partial-clone
=============

The "Partial Clone" feature is a performance optimization for Git that allows Git to function without having a complete copy of the repository. The goal of this work is to allow Git to better handle extremely large repositories.

During clone and fetch operations, Git downloads the complete contents and history of the repository. This includes all commits, trees, and blobs for the complete life of the repository. For extremely large repositories, clones can take hours (or days) and consume 100+GiB of disk space.

Often in these repositories there are many blobs and trees that the user does not need such as:

1. files outside of the user’s work area in the tree. For example, in a repository with 500K directories and 3.5M files in every commit, we can avoid downloading many objects if the user only needs a narrow "cone" of the source tree.
2. large binary assets.
For example, in a repository where large build artifacts are checked into the tree, we can avoid downloading all previous versions of these non-mergeable binary assets and only download versions that are actually referenced.

Partial clone allows us to avoid downloading such unneeded objects **in advance** during clone and fetch operations and thereby reduce download times and disk usage. Missing objects can later be "demand fetched" if/when needed.

A remote that can later provide the missing objects is called a promisor remote, as it promises to send the objects when requested. Initially Git supported only one promisor remote, the origin remote from which the user cloned and that was configured in the "extensions.partialClone" config option. Later, support for more than one promisor remote was implemented.

Use of partial clone requires that the user be online and the origin remote or other promisor remotes be available for on-demand fetching of missing objects. This may or may not be problematic for the user. For example, if the user can stay within the pre-selected subset of the source tree, they may not encounter any missing objects. Alternatively, the user could try to pre-fetch various objects if they know that they are going offline.

Non-goals
---------

Partial clone is a mechanism to limit the number of blobs and trees downloaded **within** a given range of commits — and is therefore independent of and not intended to conflict with existing DAG-level mechanisms to limit the set of requested commits (i.e. shallow clone, single branch, or fetch `<refspec>`).

Design overview
---------------

Partial clone logically consists of the following parts:

* A mechanism for the client to describe unneeded or unwanted objects to the server.
* A mechanism for the server to omit such unwanted objects from packfiles sent to the client.
* A mechanism for the client to gracefully handle missing objects (that were previously omitted by the server).
* A mechanism for the client to backfill missing objects as needed.

Design details
--------------

* A new pack-protocol capability "filter" is added to the fetch-pack and upload-pack negotiation. This uses the existing capability discovery mechanism. See "filter" in [gitprotocol-pack[5]](gitprotocol-pack).
* Clients pass a "filter-spec" to clone and fetch which is passed to the server to request filtering during packfile construction. There are various filters available to accommodate different situations. See "--filter=<filter-spec>" in Documentation/rev-list-options.txt.
* On the server pack-objects applies the requested filter-spec as it creates "filtered" packfiles for the client. These filtered packfiles are **incomplete** in the traditional sense because they may contain objects that reference objects not contained in the packfile and that the client doesn’t already have. For example, the filtered packfile may contain trees or tags that reference missing blobs or commits that reference missing trees.
* On the client these incomplete packfiles are marked as "promisor packfiles" and treated differently by various commands.
* On the client a repository extension is added to the local config to prevent older versions of git from failing mid-operation because of missing objects that they cannot handle. See "extensions.partialClone" in Documentation/technical/repository-version.txt.

Handling missing objects
------------------------

* An object may be missing due to a partial clone or fetch, or missing due to repository corruption. To differentiate these cases, the local repository specially indicates such filtered packfiles obtained from promisor remotes as "promisor packfiles". These promisor packfiles consist of a "<name>.promisor" file with arbitrary contents (like the "<name>.keep" files), in addition to their "<name>.pack" and "<name>.idx" files.
* The local repository considers a "promisor object" to be an object that it knows (to the best of its ability) that promisor remotes have promised that they have, either because the local repository has that object in one of its promisor packfiles, or because another promisor object refers to it. When Git encounters a missing object, Git can see if it is a promisor object and handle it appropriately. If not, Git can report a corruption. This means that there is no need for the client to explicitly maintain an expensive-to-modify list of missing objects.[a]
* Since almost all Git code currently expects any referenced object to be present locally and because we do not want to force every command to do a dry-run first, a fallback mechanism is added to allow Git to attempt to dynamically fetch missing objects from promisor remotes. When the normal object lookup fails to find an object, Git invokes `promisor_remote_get_direct()` to try to get the object from a promisor remote and then retry the object lookup. This allows objects to be "faulted in" without complicated prediction algorithms. For efficiency reasons, no check as to whether the missing object is actually a promisor object is performed. Dynamic object fetching tends to be slow as objects are fetched one at a time.
* `checkout` (and any other command using `unpack-trees`) has been taught to bulk pre-fetch all required missing blobs in a single batch.
* `rev-list` has been taught to print missing objects. This can be used by other commands to bulk prefetch objects. For example, a "git log -p A..B" may internally want to first do something like "git rev-list --objects --quiet --missing=print A..B" and prefetch those objects in bulk.
* `fsck` has been updated to be fully aware of promisor objects.
* `repack` in GC has been updated to not touch promisor packfiles at all, and to only repack other objects.
* The global variable `fetch_if_missing` is used to control whether an object lookup will attempt to dynamically fetch a missing object or report an error. We are not happy with this global variable and would like to remove it, but that requires significant refactoring of the object code to pass an additional flag.

Fetching missing objects
------------------------

* Fetching of objects is done by invoking a "git fetch" subprocess.
* The local repository sends a request with the hashes of all requested objects, and does not perform any packfile negotiation. It then receives a packfile.
* Because we are reusing the existing fetch mechanism, fetching currently fetches all objects referred to by the requested objects, even though they are not necessary.
* Fetching with `--refetch` will request a complete new filtered packfile from the remote, which can be used to change a filter without needing to dynamically fetch missing objects.

Using many promisor remotes
---------------------------

Many promisor remotes can be configured and used. This allows for example a user to have multiple geographically-close cache servers for fetching missing blobs while continuing to do filtered `git-fetch` commands from the central server. When fetching objects, promisor remotes are tried one after the other until all the objects have been fetched.

Remotes that are considered "promisor" remotes are those specified by the following configuration variables:

* `extensions.partialClone = <name>`
* `remote.<name>.promisor = true`
* `remote.<name>.partialCloneFilter = ...`

Only one promisor remote can be configured using the `extensions.partialClone` config variable. This promisor remote will be the last one tried when fetching objects.
We decided to make it the last one we try, because it is likely that someone using many promisor remotes is doing so because the other promisor remotes are better for some reason (maybe they are closer or faster for some kind of objects) than the origin, and the origin is likely to be the remote specified by extensions.partialClone.

This justification is not very strong, but one choice had to be made, and anyway the long term plan should be to make the order somehow fully configurable. For now though the other promisor remotes will be tried in the order they appear in the config file.

Current limitations
-------------------

* It is not possible to specify the order in which the promisor remotes are tried in other ways than the order in which they appear in the config file. It is also not possible to specify an order to be used when fetching from one remote and a different order when fetching from another remote.
* It is not possible to push only specific objects to a promisor remote. It is not possible to push at the same time to multiple promisor remotes in a specific order.
* Dynamic object fetching will only ask promisor remotes for missing objects. We assume that promisor remotes have a complete view of the repository and can satisfy all such requests.
* Repack essentially treats promisor and non-promisor packfiles as 2 distinct partitions and does not mix them.
* Dynamic object fetching invokes fetch-pack once **for each item** because most algorithms stumble upon a missing object and need to have it resolved before continuing their work. This may incur significant overhead — and multiple authentication requests — if many objects are needed.
* Dynamic object fetching currently uses the existing pack protocol V0 which means that each object is requested via fetch-pack. The server will send a full set of info/refs when the connection is established. If there are a large number of refs, this may incur significant overhead.
Future work
-----------

* Improve the way to specify the order in which promisor remotes are tried. For example, this could allow specifying explicitly something like: "When fetching from this remote, I want to use these promisor remotes in this order, though, when pushing or fetching to that remote, I want to use those promisor remotes in that order."
* Allow pushing to promisor remotes. The user might want to work in a triangular workflow with multiple promisor remotes that each have an incomplete view of the repository.
* Allow non-pathname-based filters to make use of packfile bitmaps (when present). This was just an omission during the initial implementation.
* Investigate use of a long-running process to dynamically fetch a series of objects, as proposed in [5,6], to reduce process startup and overhead costs. It would be nice if pack protocol V2 could allow that long-running process to make a series of requests over a single long-running connection.
* Investigate pack protocol V2 to avoid the info/refs broadcast on each connection with the server to dynamically fetch missing objects.
* Investigate the need to handle loose promisor objects. Objects in promisor packfiles are allowed to reference missing objects that can be dynamically fetched from the server. An assumption was made that loose objects are only created locally and therefore should not reference a missing object. We may need to revisit that assumption if, for example, we dynamically fetch a missing tree and store it as a loose object rather than as a single-object packfile. This does not necessarily mean we need to mark loose objects as promisor; it may be sufficient to relax the object lookup or is-promisor functions.

Non-tasks
---------

* Every time the subject of "demand loading blobs" comes up, it seems that someone suggests that the server be allowed to "guess" and send additional objects that may be related to the requested objects.
No work has gone into actually doing that; we’re just documenting that it is a common suggestion. We’re not sure how it would work and have no plans to work on it. It is valid for the server to send more objects than requested (even for a dynamic object fetch), but we are not building on that.

Footnotes
---------

[a] expensive-to-modify list of missing objects: Earlier in the design of partial clone we discussed the need for a single list of missing objects. This would essentially be a sorted linear list of OIDs that were omitted by the server during a clone or subsequent fetches.

This file would need to be loaded into memory on every object lookup. It would need to be read, updated, and re-written (like the `.git/index`) on every explicit "git fetch" command **and** on any dynamic object fetch.

The cost to read, update, and write this file could add significant overhead to every command if there are many missing objects. For example, if there are 100M missing blobs, this file would be at least 2GiB on disk.

With the "promisor" concept, we **infer** a missing object based upon the type of packfile that references it.
Related links ------------- [0] <https://crbug.com/git/2> Bug#2: Partial Clone [1] <https://lore.kernel.org/git/[email protected]/> Subject: [RFC] Add support for downloading blobs on demand Date: Fri, 13 Jan 2017 10:52:53 -0500 [2] <https://lore.kernel.org/git/[email protected]/> Subject: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches) Date: Fri, 29 Sep 2017 13:11:36 -0700 [3] <https://lore.kernel.org/git/[email protected]/> Subject: Proposal for missing blob support in Git repos Date: Wed, 26 Apr 2017 15:13:46 -0700 [4] <https://lore.kernel.org/git/[email protected]/> Subject: [PATCH 00/10] RFC Partial Clone and Fetch Date: Wed, 8 Mar 2017 18:50:29 +0000 [5] <https://lore.kernel.org/git/[email protected]/> Subject: [PATCH v7 00/10] refactor the filter process code into a reusable module Date: Fri, 5 May 2017 11:27:52 -0400 [6] <https://lore.kernel.org/git/[email protected]/> Subject: [RFC/PATCH v2 0/1] Add support for downloading blobs on demand Date: Fri, 14 Jul 2017 09:26:50 -0400
git gitfaq gitfaq ====== Name ---- gitfaq - Frequently asked questions about using Git Synopsis -------- gitfaq Description ----------- The examples in this FAQ assume a standard POSIX shell, like `bash` or `dash`, and a user, A U Thor, who has the account `author` on the hosting provider `git.example.org`. Configuration ------------- What should I put in `user.name`? You should put your personal name, generally a form using a given name and family name. For example, the current maintainer of Git uses "Junio C Hamano". This will be the name portion that is stored in every commit you make. This configuration doesn’t have any effect on authenticating to remote services; for that, see `credential.username` in [git-config[1]](git-config). What does `http.postBuffer` really do? This option changes the size of the buffer that Git uses when pushing data to a remote over HTTP or HTTPS. If the data is larger than this size, libcurl, which handles the HTTP support for Git, will use chunked transfer encoding since it isn’t known ahead of time what the size of the pushed data will be. Leaving this value at the default size is fine unless you know that either the remote server or a proxy in the middle doesn’t support HTTP/1.1 (which introduced the chunked transfer encoding) or is known to be broken with chunked data. This is often (erroneously) suggested as a solution for generic push problems, but since almost every server and proxy supports at least HTTP/1.1, raising this value usually doesn’t solve most push problems. A server or proxy that didn’t correctly support HTTP/1.1 and chunked transfer encoding wouldn’t be that useful on the Internet today, since it would break lots of traffic. Note that increasing this value will increase the memory used on every relevant push that Git does over HTTP or HTTPS, since the entire buffer is allocated regardless of whether or not it is all used. Thus, it’s best to leave it at the default unless you are sure you need a different value. 
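For the rare case where a buggy proxy really does require it, the buffer can be raised per repository. This sketch uses a throwaway repository, and the 100 MiB value is an arbitrary illustration, not a recommendation:

```
cd "$(mktemp -d)" && git init -q
# 104857600 bytes = 100 MiB; affects only HTTP(S) pushes from this repository.
git config http.postBuffer 104857600
git config http.postBuffer   # prints 104857600
```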
How do I configure a different editor? If you haven’t specified an editor specifically for Git, it will by default use the editor you’ve configured using the `VISUAL` or `EDITOR` environment variables, or if neither is specified, the system default (which is usually `vi`). Since some people find `vi` difficult to use or prefer a different editor, it may be desirable to change the editor used. If you want to configure a general editor for most programs which need one, you can edit your shell configuration (e.g., `~/.bashrc` or `~/.zshenv`) to contain a line setting the `EDITOR` or `VISUAL` environment variable to an appropriate value. For example, if you prefer the editor `nano`, then you could write the following: ``` export VISUAL=nano ``` If you want to configure an editor specifically for Git, you can either set the `core.editor` configuration value or the `GIT_EDITOR` environment variable. You can see [git-var[1]](git-var) for details on the order in which these options are consulted. Note that in all cases, the editor value will be passed to the shell, so any arguments containing spaces should be appropriately quoted. Additionally, if your editor normally detaches from the terminal when invoked, you should specify it with an argument that makes it not do that, or else Git will not see any changes. An example of a configuration addressing both of these issues on Windows would be the configuration `"C:\Program Files\Vim\gvim.exe" --nofork`, which quotes the filename with spaces and specifies the `--nofork` option to avoid backgrounding the process. Credentials ----------- How do I specify my credentials when pushing over HTTP? The easiest way to do this is to use a credential helper via the `credential.helper` configuration. Most systems provide a standard choice to integrate with the system credential manager. 
For example, Git for Windows provides the `wincred` credential manager, macOS has the `osxkeychain` credential manager, and Unix systems with a standard desktop environment can use the `libsecret` credential manager. All of these store credentials in an encrypted store to keep your passwords or tokens secure.

In addition, you can use the `store` credential manager, which stores credentials in a file in your home directory, or the `cache` credential manager, which does not permanently store your credentials, but does prevent you from being prompted for them for a certain period of time.

You can also just enter your password when prompted. While it is possible to place the password (which must be percent-encoded) in the URL, this is not particularly secure and can lead to accidental exposure of credentials, so it is not recommended.

How do I read a password or token from an environment variable? The `credential.helper` configuration option can also take an arbitrary shell command that produces the credential protocol on standard output. This is useful when passing credentials into a container, for example.

Such a shell command can be specified by starting the option value with an exclamation point. If your password or token were stored in the `GIT_TOKEN` environment variable, you could run the following command to set your credential helper:

```
$ git config credential.helper \
  '!f() { echo username=author; echo "password=$GIT_TOKEN"; };f'
```

How do I change the password or token I’ve saved in my credential manager? Usually, if the password or token is invalid, Git will erase it and prompt for a new one. However, there are times when this doesn’t happen. To change the password or token, you can erase the existing credentials and then Git will prompt for new ones.
To erase credentials, use a syntax like the following (substituting your username and the hostname):

```
$ echo url=https://author@git.example.org | git credential reject
```

How do I use multiple accounts with the same hosting provider using HTTP? Usually the easiest way to distinguish between these accounts is to use the username in the URL. For example, if you have the accounts `author` and `committer` on `git.example.org`, you can use the URLs <https://author@git.example.org/org1/project1.git> and <https://committer@git.example.org/org2/project2.git>. This way, when you use a credential helper, it will automatically try to look up the correct credentials for your account. If you already have a remote set up, you can change the URL with something like `git remote set-url origin https://author@git.example.org/org1/project1.git` (see [git-remote[1]](git-remote) for details).

How do I use multiple accounts with the same hosting provider using SSH? With most hosting providers that support SSH, a single key pair uniquely identifies a user. Therefore, to use multiple accounts, it’s necessary to create a key pair for each account. If you’re using a reasonably modern OpenSSH version, you can create a new key pair with something like `ssh-keygen -t ed25519 -f ~/.ssh/id_committer`. You can then register the public key (in this case, `~/.ssh/id_committer.pub`; note the `.pub`) with the hosting provider.

Most hosting providers use a single SSH account for pushing; that is, all users push to the `git` account (e.g., `git@git.example.org`). If that’s the case for your provider, you can set up multiple aliases in SSH to make it clear which key pair to use. For example, you could write something like the following in `~/.ssh/config`, substituting the proper private key file:

```
# This is the account for author on git.example.org.
Host example_author
HostName git.example.org
User git
# This is the key pair registered for author with git.example.org.
IdentityFile ~/.ssh/id_author
IdentitiesOnly yes
# This is the account for committer on git.example.org.
Host example_committer
HostName git.example.org
User git
# This is the key pair registered for committer with git.example.org.
IdentityFile ~/.ssh/id_committer
IdentitiesOnly yes
```

Then, you can adjust your push URL to use `git@example_author` or `git@example_committer` instead of `git@git.example.org` (e.g., `git remote set-url origin git@example_author:org1/project1.git`).

Common issues
-------------

I’ve made a mistake in the last commit. How do I change it? You can make the appropriate change to your working tree, run `git add <file>` or `git rm <file>`, as appropriate, to stage it, and then `git commit --amend`. Your change will be included in the commit, and you’ll be prompted to edit the commit message again; if you wish to use the original message verbatim, you can use the `--no-edit` option to `git commit` in addition, or just save and quit when your editor opens.

I’ve made a change with a bug and it’s been included in the main branch. How should I undo it? The usual way to deal with this is to use `git revert`. This preserves the history that the original change was made and was a valuable contribution, but also introduces a new commit that undoes those changes because the original had a problem. The commit message of the revert indicates the commit which was reverted and is usually edited to include an explanation as to why the revert was made.

How do I ignore changes to a tracked file? Git doesn’t provide a way to do this. The reason is that if Git needs to overwrite this file, such as during a checkout, it doesn’t know whether the changes to the file are precious and should be kept, or whether they are irrelevant and can safely be destroyed. Therefore, it has to take the safe route and always preserve them.
It’s tempting to try to use certain features of `git update-index`, namely the assume-unchanged and skip-worktree bits, but these don’t work properly for this purpose and shouldn’t be used this way.

If your goal is to modify a configuration file, it can often be helpful to have a file checked into the repository which is a template or set of defaults which can then be copied alongside and modified as appropriate. This second, modified file is usually ignored to prevent accidentally committing it.

I asked Git to ignore various files, yet they are still tracked A `gitignore` file ensures that certain files which are not tracked by Git remain untracked. However, files that were already tracked before being added to `.gitignore` remain tracked. To untrack and ignore such files, use `git rm --cached <file/pattern>` and add a pattern to `.gitignore` that matches the file. See [gitignore[5]](gitignore) for details.

How do I know if I want to do a fetch or a pull? A fetch stores a copy of the latest changes from the remote repository, without modifying the working tree or current branch. You can then at your leisure inspect, merge, rebase on top of, or ignore the upstream changes. A pull consists of a fetch followed immediately by either a merge or rebase. See [git-pull[1]](git-pull).

Merging and rebasing
--------------------

What kinds of problems can occur when merging long-lived branches with squash merges? In general, there are a variety of problems that can occur when using squash merges to merge two branches multiple times. These can include seeing extra commits in `git log` output, with a GUI, or when using the `...` notation to express a range, as well as the possibility of needing to re-resolve conflicts again and again.
When Git does a normal merge between two branches, it considers exactly three points: the two branches and a third commit, called the `merge base`, which is usually the common ancestor of the commits. The result of the merge is the sum of the changes between the merge base and each head. When you merge two branches with a regular merge commit, this results in a new commit which will end up as a merge base when they’re merged again, because there is now a new common ancestor. Git doesn’t have to consider changes that occurred before the merge base, so you don’t have to re-resolve any conflicts you resolved before. When you perform a squash merge, a merge commit isn’t created; instead, the changes from one side are applied as a regular commit to the other side. This means that the merge base for these branches won’t have changed, and so when Git goes to perform its next merge, it considers all of the changes that it considered the last time plus the new changes. That means any conflicts may need to be re-resolved. Similarly, anything using the `...` notation in `git diff`, `git log`, or a GUI will result in showing all of the changes since the original merge base. As a consequence, if you want to merge two long-lived branches repeatedly, it’s best to always use a regular merge commit. If I make a change on two branches but revert it on one, why does the merge of those branches include the change? By default, when Git does a merge, it uses a strategy called the `ort` strategy, which does a fancy three-way merge. In such a case, when Git performs the merge, it considers exactly three points: the two heads and a third point, called the `merge base`, which is usually the common ancestor of those commits. Git does not consider the history or the individual commits that have happened on those branches at all. As a result, if both sides have a change and one side has reverted that change, the result is to include the change. 
This is because the code has changed on one side and there is no net change on the other, and in this scenario, Git adopts the change. If this is a problem for you, you can do a rebase instead, rebasing the branch with the revert onto the other branch. A rebase in this scenario will revert the change, because a rebase applies each individual commit, including the revert. Note that rebases rewrite history, so you should avoid rebasing published branches unless you’re sure you’re comfortable with that. See the NOTES section in [git-rebase[1]](git-rebase) for more details. Hooks ----- How do I use hooks to prevent users from making certain changes? The only safe place to make these changes is on the remote repository (i.e., the Git server), usually in the `pre-receive` hook or in a continuous integration (CI) system. These are the locations in which policy can be enforced effectively. It’s common to try to use `pre-commit` hooks (or, for commit messages, `commit-msg` hooks) to check these things, which is great if you’re working as a solo developer and want the tooling to help you. However, using hooks on a developer machine is not effective as a policy control because a user can bypass these hooks with `--no-verify` without being noticed (among various other ways). Git assumes that the user is in control of their local repositories and doesn’t try to prevent this or tattle on the user. In addition, some advanced users find `pre-commit` hooks to be an impediment to workflows that use temporary commits to stage work in progress or that create fixup commits, so it’s better to push these kinds of checks to the server anyway. Cross-platform issues --------------------- I’m on Windows and my text files are detected as binary. Git works best when you store text files as UTF-8. Many programs on Windows support UTF-8, but some do not and only use the little-endian UTF-16 format, which Git detects as binary. 
If you can’t use UTF-8 with your programs, you can specify a working tree encoding that indicates which encoding your files should be checked out with, while still storing these files as UTF-8 in the repository. This allows tools like [git-diff[1]](git-diff) to work as expected, while still allowing your tools to work. To do so, you can specify a [gitattributes[5]](gitattributes) pattern with the `working-tree-encoding` attribute. For example, the following pattern sets all C files to use UTF-16LE-BOM, which is a common encoding on Windows: ``` *.c working-tree-encoding=UTF-16LE-BOM ``` You will need to run `git add --renormalize` to have this take effect. Note that if you are making these changes on a project that is used across platforms, you’ll probably want to make it in a per-user configuration file or in the one in `$GIT_DIR/info/attributes`, since making it in a `.gitattributes` file in the repository will apply to all users of the repository. See the following entry for information about normalizing line endings as well, and see [gitattributes[5]](gitattributes) for more information about attribute files. I’m on Windows and git diff shows my files as having a `^M` at the end. By default, Git expects files to be stored with Unix line endings. As such, the carriage return (`^M`) that is part of a Windows line ending is shown because it is considered to be trailing whitespace. Git defaults to showing trailing whitespace only on new lines, not existing ones. You can store the files in the repository with Unix line endings and convert them automatically to your platform’s line endings. To do that, set the configuration option `core.eol` to `native` and see the following entry for information about how to configure files as text or binary. You can also control this behavior with the `core.whitespace` setting if you don’t wish to remove the carriage returns from your line endings. Why do I have a file that’s always modified? 
Internally, Git always stores file names as sequences of bytes and doesn’t perform any encoding or case folding. However, Windows and macOS by default both perform case folding on file names. As a result, it’s possible to end up with multiple files or directories whose names differ only in case. Git can handle this just fine, but the file system can store only one of these files, so when Git reads the other file to see its contents, it looks modified. It’s best to remove one of the files such that you only have one file. You can do this with commands like the following (assuming two files `AFile.txt` and `afile.txt`) on an otherwise clean working tree: ``` $ git rm --cached AFile.txt $ git commit -m 'Remove files conflicting in case' $ git checkout . ``` This avoids touching the disk, but removes the additional file. Your project may prefer to adopt a naming convention, such as all-lowercase names, to avoid this problem from occurring again; such a convention can be checked using a `pre-receive` hook or as part of a continuous integration (CI) system. It is also possible for perpetually modified files to occur on any platform if a smudge or clean filter is in use on your system but a file was previously committed without running the smudge or clean filter. To fix this, run the following on an otherwise clean working tree: ``` $ git add --renormalize . ``` What’s the recommended way to store files in Git? While Git can store and handle any file of any type, there are some settings that work better than others. In general, we recommend that text files be stored in UTF-8 without a byte-order mark (BOM) with LF (Unix-style) endings. We also recommend the use of UTF-8 (again, without BOM) in commit messages. These are the settings that work best across platforms and with tools such as `git diff` and `git merge`. 
Additionally, if you have a choice between storage formats that are text based or non-text based, we recommend storing files in the text format and, if necessary, transforming them into the other format. For example, a text-based SQL dump with one record per line will work much better for diffing and merging than an actual database file. Similarly, text-based formats such as Markdown and AsciiDoc will work better than binary formats such as Microsoft Word and PDF. Similarly, storing binary dependencies (e.g., shared libraries or JAR files) or build products in the repository is generally not recommended. Dependencies and build products are best stored on an artifact or package server with only references, URLs, and hashes stored in the repository. We also recommend setting a [gitattributes[5]](gitattributes) file to explicitly mark which files are text and which are binary. If you want Git to guess, you can set the attribute `text=auto`. For example, the following might be appropriate in some projects: ``` # By default, guess. * text=auto # Mark all C files as text. *.c text # Mark all JPEG files as binary. *.jpg binary ``` These settings help tools pick the right format for output such as patches and result in files being checked out in the appropriate line ending for the platform.
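To confirm which attributes actually apply to a given path, `git check-attr` can be used. A sketch in a throwaway repository, with hypothetical file names:

```
cd "$(mktemp -d)" && git init -q
printf '* text=auto\n*.jpg binary\n' >.gitattributes
git check-attr text -- main.c      # main.c: text: auto
git check-attr text -- photo.jpg   # photo.jpg: text: unset
```

The `text` attribute comes back unset for the JPEG file because `binary` is a built-in macro expanding to `-diff -merge -text`.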
git git-fetch git-fetch ========= Name ---- git-fetch - Download objects and refs from another repository Synopsis -------- ``` git fetch [<options>] [<repository> [<refspec>…​]] git fetch [<options>] <group> git fetch --multiple [<options>] [(<repository> | <group>)…​] git fetch --all [<options>] ``` Description ----------- Fetch branches and/or tags (collectively, "refs") from one or more other repositories, along with the objects necessary to complete their histories. Remote-tracking branches are updated (see the description of <refspec> below for ways to control this behavior). By default, any tag that points into the histories being fetched is also fetched; the effect is to fetch tags that point at branches that you are interested in. This default behavior can be changed by using the --tags or --no-tags options or by configuring remote.<name>.tagOpt. By using a refspec that fetches tags explicitly, you can fetch tags that do not point into branches you are interested in as well. `git fetch` can fetch from either a single named repository or URL, or from several repositories at once if <group> is given and there is a remotes.<group> entry in the configuration file. (See [git-config[1]](git-config)). When no remote is specified, by default the `origin` remote will be used, unless there’s an upstream branch configured for the current branch. The names of refs that are fetched, together with the object names they point at, are written to `.git/FETCH_HEAD`. This information may be used by scripts or other git commands, such as [git-pull[1]](git-pull). Options ------- --all Fetch all remotes. -a --append Append ref names and object names of fetched refs to the existing contents of `.git/FETCH_HEAD`. Without this option old data in `.git/FETCH_HEAD` will be overwritten. --atomic Use an atomic transaction to update local refs. Either all refs are updated, or on error, no refs are updated. 
--depth=<depth> Limit fetching to the specified number of commits from the tip of each remote branch history. If fetching to a `shallow` repository created by `git clone` with the `--depth=<depth>` option (see [git-clone[1]](git-clone)), deepen or shorten the history to the specified number of commits. Tags for the deepened commits are not fetched.

--deepen=<depth> Similar to --depth, except it specifies the number of commits from the current shallow boundary instead of from the tip of each remote branch history.

--shallow-since=<date> Deepen or shorten the history of a shallow repository to include all reachable commits after <date>.

--shallow-exclude=<revision> Deepen or shorten the history of a shallow repository to exclude commits reachable from a specified remote branch or tag. This option can be specified multiple times.

--unshallow If the source repository is complete, convert a shallow repository to a complete one, removing all the limitations imposed by shallow repositories. If the source repository is shallow, fetch as much as possible so that the current repository has the same history as the source repository.

--update-shallow By default, when fetching from a shallow repository, `git fetch` refuses refs that require updating .git/shallow. This option updates .git/shallow and accepts such refs.

--negotiation-tip=<commit|glob> By default, Git will report, to the server, commits reachable from all local refs to find common commits in an attempt to reduce the size of the to-be-received packfile. If specified, Git will only report commits reachable from the given tips. This is useful to speed up fetches when the user knows which local ref is likely to have commits in common with the upstream ref being fetched. This option may be specified more than once; if so, Git will report commits reachable from any of the given commits. The argument to this option may be a glob on ref names, a ref, or the (possibly abbreviated) SHA-1 of a commit.
Specifying a glob is equivalent to specifying this option multiple times, one for each matching ref name. See also the `fetch.negotiationAlgorithm` and `push.negotiate` configuration variables documented in [git-config[1]](git-config), and the `--negotiate-only` option below. --negotiate-only Do not fetch anything from the server, and instead print the ancestors of the provided `--negotiation-tip=*` arguments, which we have in common with the server. This is incompatible with `--recurse-submodules=[yes|on-demand]`. Internally this is used to implement the `push.negotiate` option, see [git-config[1]](git-config). --dry-run Show what would be done, without making any changes. --[no-]write-fetch-head Write the list of remote refs fetched in the `FETCH_HEAD` file directly under `$GIT_DIR`. This is the default. Passing `--no-write-fetch-head` from the command line tells Git not to write the file. Under `--dry-run` option, the file is never written. -f --force When `git fetch` is used with `<src>:<dst>` refspec it may refuse to update the local branch as discussed in the `<refspec>` part below. This option overrides that check. -k --keep Keep downloaded pack. --multiple Allow several <repository> and <group> arguments to be specified. No <refspec>s may be specified. --[no-]auto-maintenance --[no-]auto-gc Run `git maintenance run --auto` at the end to perform automatic repository maintenance if needed. (`--[no-]auto-gc` is a synonym.) This is enabled by default. --[no-]write-commit-graph Write a commit-graph after fetching. This overrides the config setting `fetch.writeCommitGraph`. --prefetch Modify the configured refspec to place all refs into the `refs/prefetch/` namespace. See the `prefetch` task in [git-maintenance[1]](git-maintenance). -p --prune Before fetching, remove any remote-tracking references that no longer exist on the remote. Tags are not subject to pruning if they are fetched only because of the default tag auto-following or due to a --tags option. 
However, if tags are fetched due to an explicit refspec (either on the command line or in the remote configuration, for example if the remote was cloned with the --mirror option), then they are also subject to pruning. Supplying `--prune-tags` is a shorthand for providing the tag refspec. See the PRUNING section below for more details.

-P --prune-tags Before fetching, remove any local tags that no longer exist on the remote if `--prune` is enabled. This option should be used more carefully; unlike `--prune`, it will remove any local references (local tags) that have been created. This option is a shorthand for providing the explicit tag refspec along with `--prune`; see the discussion about that in its documentation. See the PRUNING section below for more details.

-n --no-tags By default, tags that point at objects that are downloaded from the remote repository are fetched and stored locally. This option disables this automatic tag following. The default behavior for a remote may be specified with the remote.<name>.tagOpt setting. See [git-config[1]](git-config).

--refetch Instead of negotiating with the server to avoid transferring commits and associated objects that are already present locally, this option fetches all objects as a fresh clone would. Use this to reapply a partial clone filter from configuration or using `--filter=` when the filter definition has changed. Automatic post-fetch maintenance will perform object database pack consolidation to remove any duplicate objects.

--refmap=<refspec> When fetching refs listed on the command line, use the specified refspec (can be given more than once) to map the refs to remote-tracking branches, instead of the values of `remote.*.fetch` configuration variables for the remote repository. Providing an empty `<refspec>` to the `--refmap` option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments.
See section on "Configured Remote-tracking Branches" for details. -t --tags Fetch all tags from the remote (i.e., fetch remote tags `refs/tags/*` into local tags with the same name), in addition to whatever else would otherwise be fetched. Using this option alone does not subject tags to pruning, even if --prune is used (though tags may be pruned anyway if they are also the destination of an explicit refspec; see `--prune`). --recurse-submodules[=yes|on-demand|no] This option controls if and under what conditions new commits of submodules should be fetched too. When recursing through submodules, `git fetch` always attempts to fetch "changed" submodules, that is, a submodule that has commits that are referenced by a newly fetched superproject commit but are missing in the local submodule clone. A changed submodule can be fetched as long as it is present locally e.g. in `$GIT_DIR/modules/` (see [gitsubmodules[7]](gitsubmodules)); if the upstream adds a new submodule, that submodule cannot be fetched until it is cloned e.g. by `git submodule update`. When set to `on-demand`, only changed submodules are fetched. When set to `yes`, all populated submodules are fetched and submodules that are both unpopulated and changed are fetched. When set to `no`, submodules are never fetched. When unspecified, this uses the value of `fetch.recurseSubmodules` if it is set (see [git-config[1]](git-config)), defaulting to `on-demand` if unset. When this option is used without any value, it defaults to `yes`. -j --jobs=<n> Number of parallel children to be used for all forms of fetching. If the `--multiple` option was specified, the different remotes will be fetched in parallel. If multiple submodules are fetched, they will be fetched in parallel. To control them independently, use the config settings `fetch.parallel` and `submodule.fetchJobs` (see [git-config[1]](git-config)). Typically, parallel recursive and multi-remote fetches will be faster. 
By default fetches are performed sequentially, not in parallel. --no-recurse-submodules Disable recursive fetching of submodules (this has the same effect as using the `--recurse-submodules=no` option). --set-upstream If the remote is fetched successfully, add an upstream (tracking) reference, used by argument-less [git-pull[1]](git-pull) and other commands. For more information, see `branch.<name>.merge` and `branch.<name>.remote` in [git-config[1]](git-config). --submodule-prefix=<path> Prepend <path> to paths printed in informative messages such as "Fetching submodule foo". This option is used internally when recursing over submodules. --recurse-submodules-default=[yes|on-demand] This option is used internally to temporarily provide a non-negative default value for the --recurse-submodules option. All other methods of configuring fetch’s submodule recursion (such as settings in [gitmodules[5]](gitmodules) and [git-config[1]](git-config)) override this option, as does specifying --[no-]recurse-submodules directly. -u --update-head-ok By default `git fetch` refuses to update the head which corresponds to the current branch. This flag disables the check. This is purely for internal use, allowing `git pull` to communicate with `git fetch`; unless you are implementing your own Porcelain you are not supposed to use it. --upload-pack <upload-pack> When given, and the repository to fetch from is handled by `git fetch-pack`, `--exec=<upload-pack>` is passed to the command to specify a non-default path for the command run on the other end. -q --quiet Pass --quiet to git-fetch-pack and silence any other internally used git commands. Progress is not reported to the standard error stream. -v --verbose Be verbose. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. 
-o <option> --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. The server’s handling of server options, including unknown ones, is server-specific. When multiple `--server-option=<option>` are given, they are all sent to the other side in the order listed on the command line. --show-forced-updates By default, git checks if a branch is force-updated during fetch. This can be disabled through fetch.showForcedUpdates, but the --show-forced-updates option guarantees this check occurs. See [git-config[1]](git-config). --no-show-forced-updates By default, git checks if a branch is force-updated during fetch. Pass --no-show-forced-updates or set fetch.showForcedUpdates to false to skip this check for performance reasons. If used during `git-pull` the --ff-only option will still check for forced updates before attempting a fast-forward update. See [git-config[1]](git-config). -4 --ipv4 Use IPv4 addresses only, ignoring IPv6 addresses. -6 --ipv6 Use IPv6 addresses only, ignoring IPv4 addresses. <repository> The "remote" repository that is the source of a fetch or pull operation. This parameter can be either a URL (see the section [GIT URLS](#URLS) below) or the name of a remote (see the section [REMOTES](#REMOTES) below). <group> A name referring to a list of repositories as the value of remotes.<group> in the configuration file. (See [git-config[1]](git-config)). <refspec> Specifies which refs to fetch and which local refs to update. When no <refspec>s appear on the command line, the refs to fetch are read from `remote.<repository>.fetch` variables instead (see [CONFIGURED REMOTE-TRACKING BRANCHES](#CRTB) below). The format of a <refspec> parameter is an optional plus `+`, followed by the source <src>, followed by a colon `:`, followed by the destination ref <dst>. The colon can be omitted when <dst> is empty. 
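As an illustrative sketch of the refspec format just described, using throwaway local repositories (the paths and the `up` remote-tracking namespace are invented for this example):

```shell
set -e
tmp=$(mktemp -d)

# A tiny "remote" repository with one commit on main.
git init -q -b main "$tmp/up"
git -C "$tmp/up" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m seed

git init -q -b main "$tmp/dl"
# Full form +<src>:<dst>; the leading "+" permits non-fast-forward updates.
git -C "$tmp/dl" fetch -q "$tmp/up" "+refs/heads/main:refs/remotes/up/main"
# <dst> (and the colon) omitted: the fetched tip is recorded only in FETCH_HEAD.
git -C "$tmp/dl" fetch -q "$tmp/up" refs/heads/main
```

The first fetch creates `refs/remotes/up/main`; the second leaves local refs untouched and only writes `FETCH_HEAD`.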
<src> is typically a ref, but it can also be a fully spelled hex object name. A <refspec> may contain a `*` in its <src> to indicate a simple pattern match. Such a refspec functions like a glob that matches any ref with the same prefix. A pattern <refspec> must have a `*` in both the <src> and <dst>. It will map refs to the destination by replacing the `*` with the contents matched from the source. If a refspec is prefixed by `^`, it will be interpreted as a negative refspec. Rather than specifying which refs to fetch or which local refs to update, such a refspec will instead specify refs to exclude. A ref will be considered to match if it matches at least one positive refspec, and does not match any negative refspec. Negative refspecs can be useful to restrict the scope of a pattern refspec so that it will not include specific refs. Negative refspecs can themselves be pattern refspecs. However, they may only contain a <src> and do not specify a <dst>. Fully spelled out hex object names are also not supported. `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`; it requests fetching everything up to the given tag. The remote ref that matches <src> is fetched, and if <dst> is not an empty string, an attempt is made to update the local ref that matches it. Whether that update is allowed without `--force` depends on the ref namespace it’s being fetched to, the type of object being fetched, and whether the update is considered to be a fast-forward. Generally, the same rules apply for fetching as when pushing, see the `<refspec>...` section of [git-push[1]](git-push) for what those are. Exceptions to those rules particular to `git fetch` are noted below. Until Git version 2.20, and unlike when pushing with [git-push[1]](git-push), any updates to `refs/tags/*` would be accepted without `+` in the refspec (or `--force`). When fetching, we promiscuously considered all tag updates from a remote to be forced fetches. 
Since Git version 2.20, fetching to update `refs/tags/*` works the same way as when pushing. That is, any updates will be rejected without `+` in the refspec (or `--force`). Unlike when pushing with [git-push[1]](git-push), any updates outside of `refs/{tags,heads}/*` will be accepted without `+` in the refspec (or `--force`), whether that’s swapping e.g. a tree object for a blob, or a commit for another commit that doesn’t have the previous commit as an ancestor, etc. Unlike when pushing with [git-push[1]](git-push), there is no configuration that will amend these rules, and nothing like a `pre-fetch` hook analogous to the `pre-receive` hook. As with pushing with [git-push[1]](git-push), all of the rules described above about what’s not allowed as an update can be overridden by adding the optional leading `+` to a refspec (or by using the `--force` command-line option). The only exception to this is that no amount of forcing will make the `refs/heads/*` namespace accept a non-commit object. | | | | --- | --- | | Note | When the remote branch you want to fetch is known to be rewound and rebased regularly, it is expected that its new tip will not be a descendant of its previous tip (as stored in your remote-tracking branch the last time you fetched). You would want to use the `+` sign to indicate non-fast-forward updates will be needed for such branches. There is no way to determine or declare that a branch will be made available in a repository with this behavior; the pulling user simply must know this is the expected usage pattern for a branch. | --stdin Read refspecs, one per line, from stdin in addition to those provided as arguments. The "tag <name>" format is not supported. Git urls -------- In general, URLs contain information about the transport protocol, the address of the remote server, and the path to the repository. Depending on the transport protocol, some of this information may be absent. 
Git supports ssh, git, http, and https protocols (in addition, ftp and ftps can be used for fetching, but this is inefficient and deprecated; do not use it). The native transport (i.e. git:// URL) does no authentication and should be used with caution on unsecured networks. The following syntaxes may be used with them: * ssh://[user@]host.xz[:port]/path/to/repo.git/ * git://host.xz[:port]/path/to/repo.git/ * http[s]://host.xz[:port]/path/to/repo.git/ * ftp[s]://host.xz[:port]/path/to/repo.git/ An alternative scp-like syntax may also be used with the ssh protocol: * [user@]host.xz:path/to/repo.git/ This syntax is only recognized if there are no slashes before the first colon. This helps differentiate a local path that contains a colon. For example, the local path `foo:bar` could be specified as an absolute path or `./foo:bar` to avoid being misinterpreted as an ssh URL. The ssh and git protocols additionally support ~username expansion: * ssh://[user@]host.xz[:port]/~[user]/path/to/repo.git/ * git://host.xz[:port]/~[user]/path/to/repo.git/ * [user@]host.xz:/~[user]/path/to/repo.git/ For local repositories, also supported by Git natively, the following syntaxes may be used: * /path/to/repo.git/ * file:///path/to/repo.git/ These two syntaxes are mostly equivalent, except when cloning, when the former implies the --local option. See [git-clone[1]](git-clone) for details. `git clone`, `git fetch` and `git pull`, but not `git push`, will also accept a suitable bundle file. See [git-bundle[1]](git-bundle). When Git doesn’t know how to handle a certain transport protocol, it attempts to use the `remote-<transport>` remote helper, if one exists. To explicitly request a remote helper, the following syntax may be used: * <transport>::<address> where <address> may be a path, a server and path, or an arbitrary URL-like string recognized by the specific remote helper being invoked. See [gitremote-helpers[7]](gitremote-helpers) for details. 
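As a small sketch of the two local syntaxes mentioned above (sandbox paths are made up here; when cloning, a plain path implies `--local`, while `file://` goes through the ordinary transport machinery):

```shell
set -e
tmp=$(mktemp -d)
git init -q -b main "$tmp/repo"
git -C "$tmp/repo" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m seed

# Plain-path syntax: when cloning, this implies --local (may hardlink objects).
git clone -q "$tmp/repo" "$tmp/by-path"
# file:// syntax: uses the normal fetch machinery instead.
git clone -q "file://$tmp/repo" "$tmp/by-url"
```

Both clones end up with the same history; the difference is only in how the objects are transferred.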
If there are a large number of similarly-named remote repositories and you want to use a different format for them (such that the URLs you use will be rewritten into URLs that work), you can create a configuration section of the form: ``` [url "<actual url base>"] insteadOf = <other url base> ``` For example, with this: ``` [url "git://git.host.xz/"] insteadOf = host.xz:/path/to/ insteadOf = work: ``` a URL like "work:repo.git" or like "host.xz:/path/to/repo.git" will be rewritten in any context that takes a URL to be "git://git.host.xz/repo.git". If you want to rewrite URLs for push only, you can create a configuration section of the form: ``` [url "<actual url base>"] pushInsteadOf = <other url base> ``` For example, with this: ``` [url "ssh://example.org/"] pushInsteadOf = git://example.org/ ``` a URL like "git://example.org/path/to/repo.git" will be rewritten to "ssh://example.org/path/to/repo.git" for pushes, but pulls will still use the original URL. Remotes ------- The name of one of the following can be used instead of a URL as `<repository>` argument: * a remote in the Git configuration file: `$GIT_DIR/config`, * a file in the `$GIT_DIR/remotes` directory, or * a file in the `$GIT_DIR/branches` directory. All of these also allow you to omit the refspec from the command line because they each contain a refspec which git will use by default. ### Named remote in configuration file You can choose to provide the name of a remote which you had previously configured using [git-remote[1]](git-remote), [git-config[1]](git-config) or even by a manual edit to the `$GIT_DIR/config` file. The URL of this remote will be used to access the repository. The refspec of this remote will be used by default when you do not provide a refspec on the command line. The entry in the config file would appear like this: ``` [remote "<name>"] url = <URL> pushurl = <pushurl> push = <refspec> fetch = <refspec> ``` The `<pushurl>` is used for pushes only. 
It is optional and defaults to `<URL>`. ### Named file in `$GIT_DIR/remotes` You can choose to provide the name of a file in `$GIT_DIR/remotes`. The URL in this file will be used to access the repository. The refspec in this file will be used as default when you do not provide a refspec on the command line. This file should have the following format: ``` URL: one of the above URL formats Push: <refspec> Pull: <refspec> ``` `Push:` lines are used by `git push` and `Pull:` lines are used by `git pull` and `git fetch`. Multiple `Push:` and `Pull:` lines may be specified for additional branch mappings. ### Named file in `$GIT_DIR/branches` You can choose to provide the name of a file in `$GIT_DIR/branches`. The URL in this file will be used to access the repository. This file should have the following format: ``` <URL>#<head> ``` `<URL>` is required; `#<head>` is optional. Depending on the operation, git will use one of the following refspecs, if you don’t provide one on the command line. `<branch>` is the name of this file in `$GIT_DIR/branches` and `<head>` defaults to `master`. git fetch uses: ``` refs/heads/<head>:refs/heads/<branch> ``` git push uses: ``` HEAD:refs/heads/<head> ``` Configured remote-tracking branches ----------------------------------- You often interact with the same remote repository by regularly and repeatedly fetching from it. In order to keep track of the progress of such a remote repository, `git fetch` allows you to configure `remote.<repository>.fetch` configuration variables. Typically such a variable may look like this: ``` [remote "origin"] fetch = +refs/heads/*:refs/remotes/origin/* ``` This configuration is used in two ways: * When `git fetch` is run without specifying what branches and/or tags to fetch on the command line, e.g. `git fetch origin` or `git fetch`, `remote.<repository>.fetch` values are used as the refspecs; they specify which refs to fetch and which local refs to update. 
The example above will fetch all branches that exist in the `origin` (i.e. any ref that matches the left-hand side of the value, `refs/heads/*`) and update the corresponding remote-tracking branches in the `refs/remotes/origin/*` hierarchy. * When `git fetch` is run with explicit branches and/or tags to fetch on the command line, e.g. `git fetch origin master`, the <refspec>s given on the command line determine what are to be fetched (e.g. `master` in the example, which is a short-hand for `master:`, which in turn means "fetch the `master` branch but I do not explicitly say what remote-tracking branch to update with it from the command line"), and the example command will fetch `only` the `master` branch. The `remote.<repository>.fetch` values determine which remote-tracking branch, if any, is updated. When used in this way, the `remote.<repository>.fetch` values do not have any effect in deciding `what` gets fetched (i.e. the values are not used as refspecs when the command-line lists refspecs); they are only used to decide `where` the refs that are fetched are stored by acting as a mapping. The latter use of the `remote.<repository>.fetch` values can be overridden by giving the `--refmap=<refspec>` parameter(s) on the command line. Pruning ------- Git has a default disposition of keeping data unless it’s explicitly thrown away; this extends to holding onto local references to branches on remotes that have themselves deleted those branches. If left to accumulate, these stale references might make performance worse on big and busy repos that have a lot of branch churn, and e.g. make the output of commands like `git branch -a --contains <commit>` needlessly verbose, as well as impacting anything else that’ll work with the complete set of known references. 
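A minimal sketch of how such a stale remote-tracking reference comes about, using throwaway local repositories (repository and branch names are invented):

```shell
set -e
tmp=$(mktemp -d)

# Upstream with a main branch and a soon-to-be-deleted branch.
git init -q -b main "$tmp/up"
git -C "$tmp/up" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m seed
git -C "$tmp/up" branch short-lived

git clone -q "$tmp/up" "$tmp/dl"
git -C "$tmp/up" branch -D short-lived   # branch deleted upstream...
git -C "$tmp/dl" fetch -q                # ...but a plain fetch keeps the stale ref
git -C "$tmp/dl" for-each-ref refs/remotes/origin   # origin/short-lived still listed
```

The stale `refs/remotes/origin/short-lived` survives every ordinary fetch until it is pruned.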
These remote-tracking references can be deleted as a one-off with either of: ``` # While fetching $ git fetch --prune <name> # Only prune, don't fetch $ git remote prune <name> ``` To prune references as part of your normal workflow without needing to remember to run that, set `fetch.prune` globally, or `remote.<name>.prune` per-remote in the config. See [git-config[1]](git-config). Here’s where things get tricky and more specific. The pruning feature doesn’t actually care about branches, instead it’ll prune local ←→ remote-references as a function of the refspec of the remote (see `<refspec>` and [CONFIGURED REMOTE-TRACKING BRANCHES](#CRTB) above). Therefore if the refspec for the remote includes e.g. `refs/tags/*:refs/tags/*`, or you manually run e.g. `git fetch --prune <name> "refs/tags/*:refs/tags/*"` it won’t be stale remote tracking branches that are deleted, but any local tag that doesn’t exist on the remote. This might not be what you expect, i.e. you want to prune remote `<name>`, but also explicitly fetch tags from it, so when you fetch from it you delete all your local tags, most of which may not have come from the `<name>` remote in the first place. So be careful when using this with a refspec like `refs/tags/*:refs/tags/*`, or any other refspec which might map references from multiple remotes to the same local namespace. Since keeping up-to-date with both branches and tags on the remote is a common use-case the `--prune-tags` option can be supplied along with `--prune` to prune local tags that don’t exist on the remote, and force-update those tags that differ. Tag pruning can also be enabled with `fetch.pruneTags` or `remote.<name>.pruneTags` in the config. See [git-config[1]](git-config). The `--prune-tags` option is equivalent to having `refs/tags/*:refs/tags/*` declared in the refspecs of the remote. 
This can lead to some seemingly strange interactions: ``` # These both fetch tags $ git fetch --no-tags origin 'refs/tags/*:refs/tags/*' $ git fetch --no-tags --prune-tags origin ``` The reason it doesn’t error out when provided without `--prune` or its config versions is for flexibility of the configured versions, and to maintain a 1=1 mapping between what the command line flags do, and what the configuration versions do. It’s reasonable to e.g. configure `fetch.pruneTags=true` in `~/.gitconfig` to have tags pruned whenever `git fetch --prune` is run, without making every invocation of `git fetch` without `--prune` an error. Pruning tags with `--prune-tags` also works when fetching a URL instead of a named remote. These will all prune tags not found on origin: ``` $ git fetch origin --prune --prune-tags $ git fetch origin --prune 'refs/tags/*:refs/tags/*' $ git fetch <url of origin> --prune --prune-tags $ git fetch <url of origin> --prune 'refs/tags/*:refs/tags/*' ``` Output ------ The output of "git fetch" depends on the transport method used; this section describes the output when fetching over the Git protocol (either locally or via ssh) and Smart HTTP protocol. The status of the fetch is output in tabular form, with each line representing the status of a single ref. Each line is of the form: ``` <flag> <summary> <from> -> <to> [<reason>] ``` The status of up-to-date refs is shown only if the --verbose option is used. In compact output mode, specified with configuration variable fetch.output, if either entire `<from>` or `<to>` is found in the other string, it will be substituted with `*` in the other string. For example, `master -> origin/master` becomes `master -> origin/*`. 
flag A single character indicating the status of the ref: (space) for a successfully fetched fast-forward; `+` for a successful forced update; `-` for a successfully pruned ref; `t` for a successful tag update; `*` for a successfully fetched new ref; `!` for a ref that was rejected or failed to update; and `=` for a ref that was up to date and did not need fetching. summary For a successfully fetched ref, the summary shows the old and new values of the ref in a form suitable for using as an argument to `git log` (this is `<old>..<new>` in most cases, and `<old>...<new>` for forced non-fast-forward updates). from The name of the remote ref being fetched from, minus its `refs/<type>/` prefix. In the case of deletion, the name of the remote ref is "(none)". to The name of the local ref being updated, minus its `refs/<type>/` prefix. reason A human-readable explanation. In the case of successfully fetched refs, no explanation is needed. For a failed ref, the reason for failure is described. Examples -------- * Update the remote-tracking branches: ``` $ git fetch origin ``` The above command copies all branches from the remote refs/heads/ namespace and stores them to the local refs/remotes/origin/ namespace, unless the `remote.<repository>.fetch` option is used to specify a non-default refspec. * Using refspecs explicitly: ``` $ git fetch origin +seen:seen maint:tmp ``` This updates (or creates, as necessary) branches `seen` and `tmp` in the local repository by fetching from the branches (respectively) `seen` and `maint` from the remote repository. The `seen` branch will be updated even if it does not fast-forward, because it is prefixed with a plus sign; `tmp` will not be. 
* Peek at a remote’s branch, without configuring the remote in your local repository: ``` $ git fetch git://git.kernel.org/pub/scm/git/git.git maint $ git log FETCH_HEAD ``` The first command fetches the `maint` branch from the repository at `git://git.kernel.org/pub/scm/git/git.git` and the second command uses `FETCH_HEAD` to examine the branch with [git-log[1]](git-log). The fetched objects will eventually be removed by git’s built-in housekeeping (see [git-gc[1]](git-gc)). Security -------- The fetch and push protocols are not designed to prevent one side from stealing data from the other repository that was not intended to be shared. If you have private data that you need to protect from a malicious peer, your best option is to store it in another repository. This applies to both clients and servers. In particular, namespaces on a server are not effective for read access control; you should only grant read access to a namespace to clients that you would trust with read access to the entire repository. The known attack vectors are as follows: 1. The victim sends "have" lines advertising the IDs of objects it has that are not explicitly intended to be shared but can be used to optimize the transfer if the peer also has them. The attacker chooses an object ID X to steal and sends a ref to X, but isn’t required to send the content of X because the victim already has it. Now the victim believes that the attacker has X, and it sends the content of X back to the attacker later. (This attack is most straightforward for a client to perform on a server, by creating a ref to X in the namespace the client has access to and then fetching it. The most likely way for a server to perform it on a client is to "merge" X into a public branch and hope that the user does additional work on this branch and pushes it back to the server without noticing the merge.) 2. As in #1, the attacker chooses an object ID X to steal. 
The victim sends an object Y that the attacker already has, and the attacker falsely claims to have X and not Y, so the victim sends Y as a delta against X. The delta reveals regions of X that are similar to Y to the attacker. Configuration ------------- Everything below this line in this section is selectively included from the [git-config[1]](git-config) documentation. The content is the same as what’s found there: fetch.recurseSubmodules This option controls whether `git fetch` (and the underlying fetch in `git pull`) will recursively fetch into populated submodules. This option can be set either to a boolean value or to `on-demand`. Setting it to a boolean changes the behavior of fetch and pull to recurse unconditionally into submodules when set to true or to not recurse at all when set to false. When set to `on-demand`, fetch and pull will only recurse into a populated submodule when its superproject retrieves a commit that updates the submodule’s reference. Defaults to `on-demand`, or to the value of `submodule.recurse` if set. fetch.fsckObjects If it is set to true, git-fetch-pack will check all fetched objects. See `transfer.fsckObjects` for what’s checked. Defaults to false. If not set, the value of `transfer.fsckObjects` is used instead. fetch.fsck.<msg-id> Acts like `fsck.<msg-id>`, but is used by [git-fetch-pack[1]](git-fetch-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.<msg-id>` documentation for details. fetch.fsck.skipList Acts like `fsck.skipList`, but is used by [git-fetch-pack[1]](git-fetch-pack) instead of [git-fsck[1]](git-fsck). See the `fsck.skipList` documentation for details. fetch.unpackLimit If the number of objects fetched over the Git native transfer is below this limit, then the objects will be unpacked into loose object files. However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. 
Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of `transfer.unpackLimit` is used instead. fetch.prune If true, fetch will automatically behave as if the `--prune` option was given on the command line. See also `remote.<name>.prune` and the PRUNING section of [git-fetch[1]](git-fetch). fetch.pruneTags If true, fetch will automatically behave as if the `refs/tags/*:refs/tags/*` refspec was provided when pruning, if not set already. This allows for setting both this option and `fetch.prune` to maintain a 1=1 mapping to upstream refs. See also `remote.<name>.pruneTags` and the PRUNING section of [git-fetch[1]](git-fetch). fetch.output Control how ref update status is printed. Valid values are `full` and `compact`. Default value is `full`. See section OUTPUT in [git-fetch[1]](git-fetch) for detail. fetch.negotiationAlgorithm Control how information about the commits in the local repository is sent when negotiating the contents of the packfile to be sent by the server. Set to "consecutive" to use an algorithm that walks over consecutive commits checking each one. Set to "skipping" to use an algorithm that skips commits in an effort to converge faster, but may result in a larger-than-necessary packfile; or set to "noop" to not send any information at all, which will almost certainly result in a larger-than-necessary packfile, but will skip the negotiation step. Set to "default" to override settings made previously and use the default behaviour. The default is normally "consecutive", but if `feature.experimental` is true, then the default is "skipping". Unknown values will cause `git fetch` to error out. See also the `--negotiate-only` and `--negotiation-tip` options to [git-fetch[1]](git-fetch). fetch.showForcedUpdates Set to false to enable `--no-show-forced-updates` in [git-fetch[1]](git-fetch) and [git-pull[1]](git-pull) commands. Defaults to true. 
fetch.parallel Specifies the maximal number of fetch operations to be run in parallel at a time (submodules, or remotes when the `--multiple` option of [git-fetch[1]](git-fetch) is in effect). A value of 0 will give some reasonable default. If unset, it defaults to 1. For submodules, this setting can be overridden using the `submodule.fetchJobs` config setting. fetch.writeCommitGraph Set to true to write a commit-graph after every `git fetch` command that downloads a pack-file from a remote. Using the `--split` option, most executions will create a very small commit-graph file on top of the existing commit-graph file(s). Occasionally, these files will merge and the write may take longer. Having an updated commit-graph file helps performance of many Git commands, including `git merge-base`, `git push -f`, and `git log --graph`. Defaults to false. Bugs ---- Using --recurse-submodules can only fetch new commits in submodules that are present locally e.g. in `$GIT_DIR/modules/`. If the upstream adds a new submodule, that submodule cannot be fetched until it is cloned e.g. by `git submodule update`. This is expected to be fixed in a future Git version. See also -------- [git-pull[1]](git-pull)
git git-cherry git-cherry ========== Name ---- git-cherry - Find commits yet to be applied to upstream Synopsis -------- ``` git cherry [-v] [<upstream> [<head> [<limit>]]] ``` Description ----------- Determine whether there are commits in `<head>..<upstream>` that are equivalent to those in the range `<limit>..<head>`. The equivalence test is based on the diff, after removing whitespace and line numbers. git-cherry therefore detects when commits have been "copied" by means of [git-cherry-pick[1]](git-cherry-pick), [git-am[1]](git-am) or [git-rebase[1]](git-rebase). Outputs the SHA1 of every commit in `<limit>..<head>`, prefixed with `-` for commits that have an equivalent in <upstream>, and `+` for commits that do not. Options ------- -v Show the commit subjects next to the SHA1s. <upstream> Upstream branch to search for equivalent commits. Defaults to the upstream branch of HEAD. <head> Working branch; defaults to HEAD. <limit> Do not report commits up to (and including) limit. Examples -------- ### Patch workflows git-cherry is frequently used in patch-based workflows (see [gitworkflows[7]](gitworkflows)) to determine if a series of patches has been applied by the upstream maintainer. In such a workflow you might create and send a topic branch like this: ``` $ git checkout -b topic origin/master # work and create some commits $ git format-patch origin/master $ git send-email ... 00* ``` Later, you can see whether your changes have been applied by saying (still on `topic`): ``` $ git fetch # update your notion of origin/master $ git cherry -v ``` ### Concrete example In a situation where topic consisted of three commits, and the maintainer applied two of them, the situation might look like: ``` $ git log --graph --oneline --decorate --boundary origin/master...topic * 7654321 (origin/master) upstream tip commit [... snip some other commits ...] * cccc111 cherry-pick of C * aaaa111 cherry-pick of A [... snip a lot more that has happened ...] 
| * cccc000 (topic) commit C | * bbbb000 commit B | * aaaa000 commit A |/ o 1234567 branch point ``` In such cases, git-cherry shows a concise summary of what has yet to be applied: ``` $ git cherry origin/master topic - cccc000... commit C + bbbb000... commit B - aaaa000... commit A ``` Here, we see that the commits A and C (marked with `-`) can be dropped from your `topic` branch when you rebase it on top of `origin/master`, while the commit B (marked with `+`) still needs to be kept so that it will be sent to be applied to `origin/master`. ### Using a limit The optional <limit> is useful in cases where your topic is based on other work that is not in upstream. Expanding on the previous example, this might look like: ``` $ git log --graph --oneline --decorate --boundary origin/master...topic * 7654321 (origin/master) upstream tip commit [... snip some other commits ...] * cccc111 cherry-pick of C * aaaa111 cherry-pick of A [... snip a lot more that has happened ...] | * cccc000 (topic) commit C | * bbbb000 commit B | * aaaa000 commit A | * 0000fff (base) unpublished stuff F [... snip ...] | * 0000aaa unpublished stuff A |/ o 1234567 merge-base between upstream and topic ``` By specifying `base` as the limit, you can avoid listing commits between `base` and `topic`: ``` $ git cherry origin/master topic base - cccc000... commit C + bbbb000... commit B - aaaa000... commit A ``` See also -------- [git-patch-id[1]](git-patch-id) git git-show-ref git-show-ref ============ Name ---- git-show-ref - List references in a local repository Synopsis -------- ``` git show-ref [-q | --quiet] [--verify] [--head] [-d | --dereference] [-s | --hash[=<n>]] [--abbrev[=<n>]] [--tags] [--heads] [--] [<pattern>…​] git show-ref --exclude-existing[=<pattern>] ``` Description ----------- Displays references available in a local repository along with the associated commit IDs. Results can be filtered using a pattern and tags can be dereferenced into object IDs. 
Additionally, it can be used to test whether a particular ref exists. By default, shows the tags, heads, and remote refs. The --exclude-existing form is a filter that does the inverse. It reads refs from stdin, one ref per line, and shows those that don’t exist in the local repository. Use of this utility is encouraged in favor of directly accessing files under the `.git` directory. Options ------- --head Show the HEAD reference, even if it would normally be filtered out. --heads --tags Limit to "refs/heads" and "refs/tags", respectively. These options are not mutually exclusive; when given both, references stored in "refs/heads" and "refs/tags" are displayed. -d --dereference Dereference tags into object IDs as well. They will be shown with "^{}" appended. -s --hash[=<n>] Only show the SHA-1 hash, not the reference name. When combined with --dereference the dereferenced tag will still be shown after the SHA-1. --verify Enable stricter reference checking by requiring an exact ref path. Aside from returning an error code of 1, it will also print an error message if `--quiet` was not specified. --abbrev[=<n>] Abbreviate the object name. When using `--hash`, you do not have to say `--hash --abbrev`; `--hash=n` would do. -q --quiet Do not print any results to stdout. When combined with `--verify` this can be used to silently check if a reference exists. --exclude-existing[=<pattern>] Make `git show-ref` act as a filter that reads refs from stdin of the form "`^(?:<anything>\s)?<refname>(?:\^{})?$`" and performs the following actions on each: (1) strip "^{}" at the end of line if any; (2) ignore if pattern is provided and does not head-match refname; (3) warn if refname is not a well-formed refname and skip; (4) ignore if refname is a ref that exists in the local repository; (5) otherwise output the line. <pattern>…​ Show references matching one or more patterns. Patterns are matched from the end of the full name, and only complete parts are matched, e.g. 
`master` matches `refs/heads/master`, `refs/remotes/origin/master`, `refs/tags/jedi/master` but not `refs/heads/mymaster` or `refs/remotes/master/jedi`.

Output
------

The output is in the format: `<SHA-1 ID>` `<space>` `<reference name>`.

```
$ git show-ref --head --dereference
832e76a9899f560a90ffd62ae2ce83bbeff58f54 HEAD
832e76a9899f560a90ffd62ae2ce83bbeff58f54 refs/heads/master
832e76a9899f560a90ffd62ae2ce83bbeff58f54 refs/heads/origin
3521017556c5de4159da4615a39fa4d5d2c279b5 refs/tags/v0.99.9c
6ddc0964034342519a87fe013781abf31c6db6ad refs/tags/v0.99.9c^{}
055e4ae3ae6eb344cbabf2a5256a49ea66040131 refs/tags/v1.0rc4
423325a2d24638ddcc82ce47be5e40be550f4507 refs/tags/v1.0rc4^{}
...
```

When using --hash (and not --dereference) the output format is: `<SHA-1 ID>`

```
$ git show-ref --heads --hash
2e3ba0114a1f52b47df29743d6915d056be13278
185008ae97960c8d551adcd9e23565194651b5d1
03adf42c988195b50e1a1935ba5fcbc39b2b029b
...
```

Examples
--------

To show all references called "master", whether tags or heads or anything else, and regardless of how deep in the reference naming hierarchy they are, use:

```
git show-ref master
```

This will show "refs/heads/master" but also "refs/remotes/other-repo/master", if such references exist.

When using the `--verify` flag, the command requires an exact path:

```
git show-ref --verify refs/heads/master
```

will only match the exact branch called "master".

If nothing matches, `git show-ref` will return an error code of 1, and in the case of verification, it will show an error message.

For scripting, you can ask it to be quiet with the "--quiet" flag, which allows you to do things like

```
git show-ref --quiet --verify -- "refs/heads/$headname" || echo "$headname is not a valid branch"
```

to check whether a particular branch exists or not (notice how we don’t actually want to show any results, and we want to use the full refname for it in order to not trigger the problem with ambiguous partial matches).
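The `--exclude-existing` filter described in the options above can be exercised end-to-end. A minimal sketch in a throwaway repository (the repository and branch names here are invented for illustration): candidate refnames are fed on stdin, and only the ones that do *not* exist locally are printed.

```shell
# Set up a throwaway repository with one branch (names are invented).
git init -q exclude-demo
cd exclude-demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git branch topic

# Feed candidate refnames on stdin; only refs absent from the repository
# are echoed back, so "refs/heads/gone" is printed but "refs/heads/topic" is not.
printf 'refs/heads/topic\nrefs/heads/gone\n' | git show-ref --exclude-existing
```

This is the inverse of the normal listing mode, which is what makes it useful as a filter when mirroring refs between repositories.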
To show only tags, or only proper branch heads, use "--tags" and/or "--heads" respectively (using both means that it shows tags and heads, but not other random references under the refs/ subdirectory). To do automatic tag object dereferencing, use the "-d" or "--dereference" flag, so you can do ``` git show-ref --tags --dereference ``` to get a listing of all tags together with what they dereference. Files ----- `.git/refs/*`, `.git/packed-refs` See also -------- [git-for-each-ref[1]](git-for-each-ref), [git-ls-remote[1]](git-ls-remote), [git-update-ref[1]](git-update-ref), [gitrepository-layout[5]](gitrepository-layout) git git-for-each-repo git-for-each-repo ================= Name ---- git-for-each-repo - Run a Git command on a list of repositories Synopsis -------- ``` git for-each-repo --config=<config> [--] <arguments> ``` Description ----------- Run a Git command on a list of repositories. The arguments after the known options or `--` indicator are used as the arguments for the Git subprocess. THIS COMMAND IS EXPERIMENTAL. THE BEHAVIOR MAY CHANGE. For example, we could run maintenance on each of a list of repositories stored in a `maintenance.repo` config variable using ``` git for-each-repo --config=maintenance.repo maintenance run ``` This will run `git -C <repo> maintenance run` for each value `<repo>` in the multi-valued config variable `maintenance.repo`. Options ------- --config=<config> Use the given config variable as a multi-valued list storing absolute path names. Iterate on that list of paths to run the given arguments. These config values are loaded from system, global, and local Git config, as available. If `git for-each-repo` is run in a directory that is not a Git repository, then only the system and global config is used. Subprocess behavior ------------------- If any `git -C <repo> <arguments>` subprocess returns a non-zero exit code, then the `git for-each-repo` process returns that exit code without running more subprocesses. 
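The per-repository iteration described above can be sketched with throwaway repositories (all paths and repository names here are invented for illustration; this assumes a Git new enough to ship the experimental `for-each-repo` command). The multi-valued `maintenance.repo` variable is set in the driver repository's local config, and each value must be an absolute path.

```shell
# Create a driver repository plus two target repositories (names invented).
git init -q driver-repo
git init -q worktree-a
git init -q worktree-b
cd driver-repo

# Register both targets as absolute paths in a multi-valued config variable.
git config --add maintenance.repo "$(cd ../worktree-a && pwd)"
git config --add maintenance.repo "$(cd ../worktree-b && pwd)"

# Runs `git -C <repo> count-objects` once per registered repository.
git for-each-repo --config=maintenance.repo count-objects
```

Because a non-zero exit from any subprocess stops the loop, a failure in `worktree-a` here would prevent `worktree-b` from being visited at all.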
Each `git -C <repo> <arguments>` subprocess inherits the standard file descriptors `stdin`, `stdout`, and `stderr`.

git gitformat-pack

gitformat-pack
==============

Name
----

gitformat-pack - Git pack format

Synopsis
--------

```
$GIT_DIR/objects/pack/pack-*.{pack,idx}
$GIT_DIR/objects/pack/pack-*.rev
$GIT_DIR/objects/pack/pack-*.mtimes
$GIT_DIR/objects/pack/multi-pack-index
```

Description
-----------

The Git pack format is how Git stores most of its primary repository data. Over the lifetime of a repository, loose objects (if any) and smaller packs are consolidated into larger pack(s). See [git-gc[1]](git-gc) and [git-pack-objects[1]](git-pack-objects).

The pack format is also used over-the-wire, see e.g. [gitprotocol-v2[5]](gitprotocol-v2), as well as being a part of other container formats in the case of [gitformat-bundle[5]](gitformat-bundle).

Checksums and object ids
------------------------

In a repository using the traditional SHA-1, pack checksums, index checksums, and object IDs (object names) mentioned below are all computed using SHA-1. Similarly, in SHA-256 repositories, these values are computed using SHA-256.

Pack-\*.pack files have the following format:
---------------------------------------------

* A header appears at the beginning and consists of the following:

```
4-byte signature:
    The signature is: {'P', 'A', 'C', 'K'}
```

```
4-byte version number (network byte order):
    Git currently accepts version number 2 or 3 but generates version 2 only.
```

```
4-byte number of objects contained in the pack (network byte order)
```

```
Observation: we cannot have more than 4G versions ;-) and more than 4G objects in a pack.
```

* The header is followed by that many object entries, each of which looks like this:

```
(undeltified representation)
n-byte type and length (3-bit type, (n-1)*7+4-bit length)
compressed data
```

```
(deltified representation)
n-byte type and length (3-bit type, (n-1)*7+4-bit length)
base object name if OBJ_REF_DELTA or a negative relative offset from the
delta object's position in the pack if this is an OBJ_OFS_DELTA object
compressed delta data
```

```
Observation: length of each object is encoded in a variable length format
and is not constrained to 32-bit or anything.
```

* The trailer records a pack checksum of all of the above.

### Object types

Valid object types are:

* OBJ\_COMMIT (1)
* OBJ\_TREE (2)
* OBJ\_BLOB (3)
* OBJ\_TAG (4)
* OBJ\_OFS\_DELTA (6)
* OBJ\_REF\_DELTA (7)

Type 5 is reserved for future expansion. Type 0 is invalid.

### Size encoding

This document uses the following "size encoding" of non-negative integers: From each byte, the seven least significant bits are used to form the resulting integer. As long as the most significant bit is 1, this process continues; the byte with MSB 0 provides the last seven bits. The seven-bit chunks are concatenated. Later values are more significant.

This size encoding should not be confused with the "offset encoding", which is also used in this document.

### Deltified representation

Conceptually there are only four object types: commit, tree, tag and blob. However to save space, an object could be stored as a "delta" of another "base" object. These representations are assigned new types ofs-delta and ref-delta, which are only valid in a pack file.

Both ofs-delta and ref-delta store the "delta" to be applied to another object (called `base object`) to reconstruct the object. The difference between them is that ref-delta directly encodes the base object name. If the base object is in the same pack, ofs-delta encodes the offset of the base object in the pack instead.
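The size encoding described above is a little-endian base-128 varint. A minimal shell sketch (the function name is invented for illustration) that decodes a sequence of bytes whose MSB-0 byte comes last:

```shell
# Decode the pack "size encoding": from each byte take the low 7 bits;
# earlier bytes are less significant. Bytes are passed as hex numbers,
# with every byte except the last having its most significant bit set.
decode_size() {
    value=0
    shift_amt=0
    for byte in "$@"; do
        value=$(( value | ( (byte & 0x7f) << shift_amt ) ))
        shift_amt=$(( shift_amt + 7 ))
    done
    echo "$value"
}

# 0x91 contributes its low 7 bits (0x11 = 17); 0x2e is shifted left by 7.
decode_size 0x91 0x2e   # 17 + (46 << 7) = 5905
```

Note this is distinct from the "offset encoding" used by OBJ\_OFS\_DELTA entries, which adds a correction term for multi-byte values.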
The base object could also be deltified if it’s in the same pack. Ref-delta can also refer to an object outside the pack (i.e. the so-called "thin pack"). When stored on disk however, the pack should be self-contained to avoid cyclic dependency.

The delta data starts with the size of the base object and the size of the object to be reconstructed. These sizes are encoded using the size encoding from above. The remainder of the delta data is a sequence of instructions to reconstruct the object from the base object. If the base object is deltified, it must be converted to canonical form first. Each instruction appends more and more data to the target object until it’s complete. There are two supported instructions so far: one to copy a byte range from the source object and one to insert new data embedded in the instruction itself. Each instruction has variable length. Instruction type is determined by the seventh bit of the first octet. The following diagrams follow the convention in RFC 1951 (Deflate compressed data format).

#### Instruction to copy from base object

```
+----------+---------+---------+---------+---------+-------+-------+-------+
| 1xxxxxxx | offset1 | offset2 | offset3 | offset4 | size1 | size2 | size3 |
+----------+---------+---------+---------+---------+-------+-------+-------+
```

This is the instruction format to copy a byte range from the source object. It encodes the offset to copy from and the number of bytes to copy. Offset and size are in little-endian order.

All offset and size bytes are optional. This is to reduce the instruction size when encoding small offsets or sizes. The first seven bits in the first octet determine which of the next seven octets are present. If bit zero is set, offset1 is present. If bit one is set offset2 is present and so on.

Note that a more compact instruction does not change offset and size encoding. For example, if only offset2 is omitted like below, offset3 still contains bits 16-23.
It does not become offset2 and contains bits 8-15 even if it’s right next to offset1.

```
+----------+---------+---------+
| 10000101 | offset1 | offset3 |
+----------+---------+---------+
```

In its most compact form, this instruction only takes up one byte (0x80) with both offset and size omitted, which will have default values zero. There is another exception: size zero is automatically converted to 0x10000.

#### Instruction to add new data

```
+----------+============+
| 0xxxxxxx |    data    |
+----------+============+
```

This is the instruction to construct the target object without the base object. The following data is appended to the target object. The first seven bits of the first octet determine the size of data in bytes. The size must be non-zero.

#### Reserved instruction

```
+----------+============
| 00000000 |
+----------+============
```

This is the instruction reserved for future expansion.

Original (version 1) pack-\*.idx files have the following format:
-----------------------------------------------------------------

* The header consists of 256 4-byte network byte order integers. N-th entry of this table records the number of objects in the corresponding pack, the first byte of whose object name is less than or equal to N. This is called the `first-level fan-out` table.
* The header is followed by sorted 24-byte entries, one entry per object in the pack. Each entry is:

```
4-byte network byte order integer, recording where the object is stored in the packfile as the offset from the beginning.
```

```
one object name of the appropriate size.
```

* The file is concluded with a trailer:

```
A copy of the pack checksum at the end of the corresponding packfile.
```

```
Index checksum of all of the above.
```

Pack Idx file:

```
        -- +--------------------------------+
fanout     | fanout[0] = 2 (for example)    |-.
table      +--------------------------------+ |
           | fanout[1]                      | |
           +--------------------------------+ |
           | fanout[2]                      | |
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
           | fanout[255] = total objects    |---.
        -- +--------------------------------+ | |
main       | offset                         | | |
index      | object name 00XXXXXXXXXXXXXXXX | | |
table      +--------------------------------+ | |
           | offset                         | | |
           | object name 00XXXXXXXXXXXXXXXX | | |
           +--------------------------------+<+ |
         .-| offset                         |   |
         | | object name 01XXXXXXXXXXXXXXXX |   |
         | +--------------------------------+   |
         | | offset                         |   |
         | | object name 01XXXXXXXXXXXXXXXX |   |
         | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~   |
         | | offset                         |   |
         | | object name FFXXXXXXXXXXXXXXXX |   |
       --| +--------------------------------+<--+
trailer  | | packfile checksum              |
         | +--------------------------------+
         | | idxfile checksum               |
         | +--------------------------------+
          .-------.
                  |
Pack file entry: <+
```

```
packed object header:
    1-byte size extension bit (MSB)
           type (next 3 bit)
           size0 (lower 4-bit)
    n-byte sizeN (as long as MSB is set, each 7-bit)
           size0..sizeN form 4+7+7+..+7 bit integer, size0
           is the least significant part, and sizeN is the
           most significant part.
packed object data:
    If it is not DELTA, then deflated bytes (the size above
    is the size before compression).
    If it is REF_DELTA, then
        base object name (the size above is the
        size of the delta data that follows).
        delta data, deflated.
    If it is OFS_DELTA, then
        n-byte offset (see below) interpreted as a negative
        offset from the type-byte of the header of the
        ofs-delta entry (the size above is the size of
        the delta data that follows).
        delta data, deflated.
```

```
offset encoding:
    n bytes with MSB set in all but the last one.
    The offset is then the number constructed by
    concatenating the lower 7 bit of each byte, and
    for n >= 2 adding 2^7 + 2^14 + ... + 2^(7*(n-1))
    to the result.
```

Version 2 pack-\*.idx files support packs larger than 4 GiB, and have some other reorganizations. They have the format:
-----------------------------------------------------------------------------------------------------------------------

* A 4-byte magic number `\377tOc` which is an unreasonable fanout[0] value.
* A 4-byte version number (= 2)
* A 256-entry fan-out table just like v1.
* A table of sorted object names. These are packed together without offset values to reduce the cache footprint of the binary search for a specific object name.
* A table of 4-byte CRC32 values of the packed object data. This is new in v2 so compressed data can be copied directly from pack to pack during repacking without undetected data corruption.
* A table of 4-byte offset values (in network byte order). These are usually 31-bit pack file offsets, but large offsets are encoded as an index into the next table with the msbit set.
* A table of 8-byte offset entries (empty for pack files less than 2 GiB). Pack files are organized with heavily used objects toward the front, so most object references should not need to refer to this table.
* The same trailer as a v1 pack file:

```
A copy of the pack checksum at the end of corresponding packfile.
```

```
Index checksum of all of the above.
```

Pack-\*.rev files have the format:
----------------------------------

* A 4-byte magic number `0x52494458` (`RIDX`).
* A 4-byte version identifier (= 1).
* A 4-byte hash function identifier (= 1 for SHA-1, 2 for SHA-256).
* A table of index positions (one per packed object, num\_objects in total, each a 4-byte unsigned integer in network order), sorted by their corresponding offsets in the packfile.
* A trailer, containing a:

```
checksum of the corresponding packfile, and
```

```
a checksum of all of the above.
```

All 4-byte numbers are in network order.

Pack-\*.mtimes files have the format:
-------------------------------------

All 4-byte numbers are in network byte order.

* A 4-byte magic number `0x4d544d45` (`MTME`).
* A 4-byte version identifier (= 1).
* A 4-byte hash function identifier (= 1 for SHA-1, 2 for SHA-256).
* A table of 4-byte unsigned integers.
The ith value is the modification time (mtime) of the ith object in the corresponding pack by lexicographic (index) order. The mtimes count standard epoch seconds. * A trailer, containing a checksum of the corresponding packfile, and a checksum of all of the above (each having length according to the specified hash function). Multi-pack-index (midx) files have the following format: -------------------------------------------------------- The multi-pack-index files refer to multiple pack-files and loose objects. In order to allow extensions that add extra data to the MIDX, we organize the body into "chunks" and provide a lookup table at the beginning of the body. The header includes certain length values, such as the number of packs, the number of base MIDX files, hash lengths and types. All 4-byte numbers are in network order. HEADER: ``` 4-byte signature: The signature is: {'M', 'I', 'D', 'X'} ``` ``` 1-byte version number: Git only writes or recognizes version 1. ``` ``` 1-byte Object Id Version We infer the length of object IDs (OIDs) from this value: 1 => SHA-1 2 => SHA-256 If the hash type does not match the repository's hash algorithm, the multi-pack-index file should be ignored with a warning presented to the user. ``` ``` 1-byte number of "chunks" ``` ``` 1-byte number of base multi-pack-index files: This value is currently always zero. ``` ``` 4-byte number of pack files ``` CHUNK LOOKUP: ``` (C + 1) * 12 bytes providing the chunk offsets: First 4 bytes describe chunk id. Value 0 is a terminating label. Other 8 bytes provide offset in current file for chunk to start. (Chunks are provided in file-order, so you can infer the length using the next chunk position if necessary.) ``` ``` The CHUNK LOOKUP matches the table of contents from the chunk-based file format, see gitformat-chunk[5]. ``` ``` The remaining data in the body is described one chunk at a time, and these chunks may be given in any order. Chunks are required unless otherwise specified. 
``` CHUNK DATA: ``` Packfile Names (ID: {'P', 'N', 'A', 'M'}) Stores the packfile names as concatenated, null-terminated strings. Packfiles must be listed in lexicographic order for fast lookups by name. This is the only chunk not guaranteed to be a multiple of four bytes in length, so should be the last chunk for alignment reasons. ``` ``` OID Fanout (ID: {'O', 'I', 'D', 'F'}) The ith entry, F[i], stores the number of OIDs with first byte at most i. Thus F[255] stores the total number of objects. ``` ``` OID Lookup (ID: {'O', 'I', 'D', 'L'}) The OIDs for all objects in the MIDX are stored in lexicographic order in this chunk. ``` ``` Object Offsets (ID: {'O', 'O', 'F', 'F'}) Stores two 4-byte values for every object. 1: The pack-int-id for the pack storing this object. 2: The offset within the pack. If all offsets are less than 2^32, then the large offset chunk will not exist and offsets are stored as in IDX v1. If there is at least one offset value larger than 2^32-1, then the large offset chunk must exist, and offsets larger than 2^31-1 must be stored in it instead. If the large offset chunk exists and the 31st bit is on, then removing that bit reveals the row in the large offsets containing the 8-byte offset of this object. ``` ``` [Optional] Object Large Offsets (ID: {'L', 'O', 'F', 'F'}) 8-byte offsets into large packfiles. ``` ``` [Optional] Bitmap pack order (ID: {'R', 'I', 'D', 'X'}) A list of MIDX positions (one per object in the MIDX, num_objects in total, each a 4-byte unsigned integer in network byte order), sorted according to their relative bitmap/pseudo-pack positions. ``` TRAILER: ``` Index checksum of the above contents. ``` Multi-pack-index reverse indexes -------------------------------- Similar to the pack-based reverse index, the multi-pack index can also be used to generate a reverse index. 
Instead of mapping between offset, pack-, and index position, this reverse index maps between an object’s position within the MIDX, and that object’s position within a pseudo-pack that the MIDX describes (i.e., the ith entry of the multi-pack reverse index holds the MIDX position of ith object in pseudo-pack order). To clarify the difference between these orderings, consider a multi-pack reachability bitmap (which does not yet exist, but is what we are building towards here). Each bit needs to correspond to an object in the MIDX, and so we need an efficient mapping from bit position to MIDX position. One solution is to let bits occupy the same position in the oid-sorted index stored by the MIDX. But because oids are effectively random, their resulting reachability bitmaps would have no locality, and thus compress poorly. (This is the reason that single-pack bitmaps use the pack ordering, and not the .idx ordering, for the same purpose.) So we’d like to define an ordering for the whole MIDX based around pack ordering, which has far better locality (and thus compresses more efficiently). We can think of a pseudo-pack created by the concatenation of all of the packs in the MIDX. E.g., if we had a MIDX with three packs (a, b, c), with 10, 15, and 20 objects respectively, we can imagine an ordering of the objects like: ``` |a,0|a,1|...|a,9|b,0|b,1|...|b,14|c,0|c,1|...|c,19| ``` where the ordering of the packs is defined by the MIDX’s pack list, and then the ordering of objects within each pack is the same as the order in the actual packfile. Given the list of packs and their counts of objects, you can naïvely reconstruct that pseudo-pack ordering (e.g., the object at position 27 must be (c,1) because packs "a" and "b" consumed 25 of the slots). But there’s a catch. Objects may be duplicated between packs, in which case the MIDX only stores one pointer to the object (and thus we’d want only one slot in the bitmap). 
Callers could handle duplicates themselves by reading objects in order of their bit-position, but that’s linear in the number of objects, and much too expensive for ordinary bitmap lookups. Building a reverse index solves this, since it is the logical inverse of the index, and that index has already removed duplicates. But, building a reverse index on the fly can be expensive. Since we already have an on-disk format for pack-based reverse indexes, let’s reuse it for the MIDX’s pseudo-pack, too.

Objects from the MIDX are ordered as follows to string together the pseudo-pack. Let `pack(o)` return the pack from which `o` was selected by the MIDX, and define an ordering of packs based on their numeric ID (as stored by the MIDX). Let `offset(o)` return the object offset of `o` within `pack(o)`. Then, compare `o1` and `o2` as follows:

* If one of `pack(o1)` and `pack(o2)` is preferred and the other is not, then the preferred one sorts first. (This is a detail that allows the MIDX bitmap to determine which pack should be used by the pack-reuse mechanism, since it can ask the MIDX for the pack containing the object at bit position 0).
* If `pack(o1) ≠ pack(o2)`, then sort the two objects in descending order based on the pack ID.
* Otherwise, `pack(o1) = pack(o2)`, and the objects are sorted in pack-order (i.e., `o1` sorts ahead of `o2` exactly when `offset(o1) < offset(o2)`).

In short, a MIDX’s pseudo-pack is the de-duplicated concatenation of objects in packs stored by the MIDX, laid out in pack order, and the packs arranged in MIDX order (with the preferred pack coming first).

The MIDX’s reverse index is stored in the optional `RIDX` chunk within the MIDX itself.

Cruft packs
-----------

The cruft packs feature offers an alternative to Git’s traditional mechanism of removing unreachable objects. This document provides an overview of Git’s pruning mechanism, and how a cruft pack can be used instead to accomplish the same.
### Background To remove unreachable objects from your repository, Git offers `git repack -Ad` (see [git-repack[1]](git-repack)). Quoting from the documentation: ``` [...] unreachable objects in a previous pack become loose, unpacked objects, instead of being left in the old pack. [...] loose unreachable objects will be pruned according to normal expiry rules with the next 'git gc' invocation. ``` Unreachable objects aren’t removed immediately, since doing so could race with an incoming push which may reference an object which is about to be deleted. Instead, those unreachable objects are stored as loose objects and stay that way until they are older than the expiration window, at which point they are removed by [git-prune[1]](git-prune). Git must store these unreachable objects loose in order to keep track of their per-object mtimes. If these unreachable objects were written into one big pack, then either freshening that pack (because an object contained within it was re-written) or creating a new pack of unreachable objects would cause the pack’s mtime to get updated, and the objects within it would never leave the expiration window. Instead, objects are stored loose in order to keep track of the individual object mtimes and avoid a situation where all cruft objects are freshened at once. This can lead to undesirable situations when a repository contains many unreachable objects which have not yet left the grace period. Having large directories in the shards of `.git/objects` can lead to decreased performance in the repository. But given enough unreachable objects, this can lead to inode starvation and degrade the performance of the whole system. Since we can never pack those objects, these repositories often take up a large amount of disk space, since we can only zlib compress them, but not store them in delta chains. 
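The traditional flow described above can be observed directly in a throwaway repository (all names here are invented for illustration): after an all-into-one repack, an unreachable blob stays loose, and `git prune` removes it once it is past the expiration window.

```shell
# Throwaway repository with one reachable commit (names are invented).
git init -q prune-demo
cd prune-demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# Write a blob that nothing references: it stays a loose, unreachable object.
blob=$(echo stray | git hash-object -w --stdin)

git repack -A -d -q      # reachable objects are packed; the stray blob stays loose
git prune --expire=now   # an expiry window of "now" prunes it immediately

git cat-file -e "$blob" 2>/dev/null && echo kept || echo pruned
```

With the default expiry rules the blob would instead survive until its mtime fell outside the grace period, which is exactly the per-object bookkeeping that forces these objects to be stored loose.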
### Cruft packs

A cruft pack eliminates the need for storing unreachable objects in a loose state by including the per-object mtimes in a separate file alongside a single pack containing all loose objects.

A cruft pack is written by `git repack --cruft` when generating a new pack, using [git-pack-objects[1]](git-pack-objects)'s `--cruft` option. Note that `git repack --cruft` is a classic all-into-one repack, meaning that everything in the resulting pack is reachable, and everything else is unreachable. Once written, the `--cruft` option instructs `git repack` to generate another pack containing only objects not packed in the previous step (which equates to packing all unreachable objects together). This progresses as follows:

1. Enumerate every object, marking any object which is (a) not contained in a kept-pack, and (b) whose mtime is within the grace period, as a traversal tip.
2. Perform a reachability traversal based on the tips gathered in the previous step, adding every object along the way to the pack.
3. Write the pack out, along with a `.mtimes` file that records the per-object timestamps.

This mode is invoked internally by [git-repack[1]](git-repack) when instructed to write a cruft pack. Crucially, the set of in-core kept packs is exactly the set of packs which will not be deleted by the repack; in other words, they contain all of the repository’s reachable objects.

When a repository already has a cruft pack, `git repack --cruft` typically only adds objects to it. An exception to this is when `git repack` is given the `--cruft-expiration` option, which allows the generated cruft pack to omit expired objects instead of waiting for [git-gc[1]](git-gc) to expire those objects later on.

It is [git-gc[1]](git-gc) that is typically responsible for removing expired unreachable objects.

### Caution for mixed-version environments

Repositories that have cruft packs in them will continue to work with any older version of Git.
Note, however, that previous versions of Git which do not understand the `.mtimes` file will use the cruft pack’s mtime as the mtime for all of the objects in it. In other words, do not expect older (pre-cruft pack) versions of Git to interpret or even read the contents of the `.mtimes` file.

Note that having mixed versions of Git GC-ing the same repository can lead to unreachable objects never being completely pruned. This can happen under the following circumstances:

* An older version of Git running GC explodes the contents of an existing cruft pack loose, using the cruft pack’s mtime.
* A newer version running GC collects those loose objects into a cruft pack, where the `.mtimes` file reflects the loose objects' actual mtimes, but the cruft pack mtime is "now".

Repeating this process will lead to unreachable objects not getting pruned as a result of repeatedly resetting the objects' mtimes to the present time.

If you are GC-ing repositories in a mixed version environment, consider omitting the `--cruft` option when using [git-repack[1]](git-repack) and [git-gc[1]](git-gc), and leaving the `gc.cruftPacks` configuration unset until all writers understand cruft packs.

### Alternatives

Notable alternatives to this design include:

* The location of the per-object mtime data, and
* Storing unreachable objects in multiple cruft packs.

On the location of mtime data, a new auxiliary file tied to the pack was chosen to avoid complicating the `.idx` format. If the `.idx` format were ever to gain support for optional chunks of data, it may make sense to consolidate the `.mtimes` format into the `.idx` itself.

Storing unreachable objects among multiple cruft packs (e.g., creating a new cruft pack during each repacking operation including only unreachable objects which aren’t already stored in an earlier cruft pack) is significantly more complicated to construct, and so isn’t pursued here.
The obvious drawback to the current implementation is that the entire cruft pack must be re-written from scratch.
git githooks

githooks
========

Name
----

githooks - Hooks used by Git

Synopsis
--------

$GIT\_DIR/hooks/\* (or `git config core.hooksPath`/\*)

Description
-----------

Hooks are programs you can place in a hooks directory to trigger actions at certain points in git’s execution. Hooks that don’t have the executable bit set are ignored.

By default the hooks directory is `$GIT_DIR/hooks`, but that can be changed via the `core.hooksPath` configuration variable (see [git-config[1]](git-config)).

Before Git invokes a hook, it changes its working directory to either $GIT\_DIR in a bare repository or the root of the working tree in a non-bare repository. Exceptions are hooks triggered during a push (`pre-receive`, `update`, `post-receive`, `post-update`, `push-to-checkout`), which are always executed in $GIT\_DIR.

Hooks can get their arguments via the environment, command-line arguments, and stdin. See the documentation for each hook below for details.

`git init` may copy hooks to the new repository, depending on its configuration. See the "TEMPLATE DIRECTORY" section in [git-init[1]](git-init) for details. When the rest of this document refers to "default hooks" it’s talking about the default template shipped with Git.

The currently supported hooks are described below.

Hooks
-----

### applypatch-msg

This hook is invoked by [git-am[1]](git-am). It takes a single parameter, the name of the file that holds the proposed commit log message. Exiting with a non-zero status causes `git am` to abort before applying the patch.

The hook is allowed to edit the message file in place, and can be used to normalize the message into some project standard format. It can also be used to refuse the commit after inspecting the message file.

The default `applypatch-msg` hook, when enabled, runs the `commit-msg` hook, if the latter is enabled.

### pre-applypatch

This hook is invoked by [git-am[1]](git-am).
It takes no parameters, and is invoked after the patch is applied, but before a commit is made. If it exits with non-zero status, then the working tree will not be committed after applying the patch. It can be used to inspect the current working tree and refuse to make a commit if it does not pass certain tests. The default `pre-applypatch` hook, when enabled, runs the `pre-commit` hook, if the latter is enabled. ### post-applypatch This hook is invoked by [git-am[1]](git-am). It takes no parameters, and is invoked after the patch is applied and a commit is made. This hook is meant primarily for notification, and cannot affect the outcome of `git am`. ### pre-commit This hook is invoked by [git-commit[1]](git-commit), and can be bypassed with the `--no-verify` option. It takes no parameters, and is invoked before obtaining the proposed commit log message and making a commit. Exiting with a non-zero status from this script causes the `git commit` command to abort before creating a commit. The default `pre-commit` hook, when enabled, catches the introduction of lines with trailing whitespace and aborts the commit when such a line is found. All the `git commit` hooks are invoked with the environment variable `GIT_EDITOR=:` if the command will not bring up an editor to modify the commit message. The default `pre-commit` hook, when enabled, and with the `hooks.allownonascii` config option unset or set to false, prevents the use of non-ASCII filenames. ### pre-merge-commit This hook is invoked by [git-merge[1]](git-merge), and can be bypassed with the `--no-verify` option. It takes no parameters, and is invoked after the merge has been carried out successfully and before obtaining the proposed commit log message to make a commit. Exiting with a non-zero status from this script causes the `git merge` command to abort before creating a commit. The default `pre-merge-commit` hook, when enabled, runs the `pre-commit` hook, if the latter is enabled. 
This hook is invoked with the environment variable `GIT_EDITOR=:` if the command will not bring up an editor to modify the commit message. If the merge cannot be carried out automatically, the conflicts need to be resolved and the result committed separately (see [git-merge[1]](git-merge)). At that point, this hook will not be executed, but the `pre-commit` hook will, if it is enabled. ### prepare-commit-msg This hook is invoked by [git-commit[1]](git-commit) right after preparing the default log message, and before the editor is started. It takes one to three parameters. The first is the name of the file that contains the commit log message. The second is the source of the commit message, and can be: `message` (if a `-m` or `-F` option was given); `template` (if a `-t` option was given or the configuration option `commit.template` is set); `merge` (if the commit is a merge or a `.git/MERGE_MSG` file exists); `squash` (if a `.git/SQUASH_MSG` file exists); or `commit`, followed by a commit object name (if a `-c`, `-C` or `--amend` option was given). If the exit status is non-zero, `git commit` will abort. The purpose of the hook is to edit the message file in place, and it is not suppressed by the `--no-verify` option. A non-zero exit means a failure of the hook and aborts the commit. It should not be used as a replacement for the pre-commit hook. The sample `prepare-commit-msg` hook that comes with Git removes the help message found in the commented portion of the commit template. ### commit-msg This hook is invoked by [git-commit[1]](git-commit) and [git-merge[1]](git-merge), and can be bypassed with the `--no-verify` option. It takes a single parameter, the name of the file that holds the proposed commit log message. Exiting with a non-zero status causes the command to abort. The hook is allowed to edit the message file in place, and can be used to normalize the message into some project standard format. 
It can also be used to refuse the commit after inspecting the message file. The default `commit-msg` hook, when enabled, detects duplicate `Signed-off-by` trailers, and aborts the commit if one is found. ### post-commit This hook is invoked by [git-commit[1]](git-commit). It takes no parameters, and is invoked after a commit is made. This hook is meant primarily for notification, and cannot affect the outcome of `git commit`. ### pre-rebase This hook is called by [git-rebase[1]](git-rebase) and can be used to prevent a branch from getting rebased. The hook may be called with one or two parameters. The first parameter is the upstream from which the series was forked. The second parameter is the branch being rebased, and is not set when rebasing the current branch. ### post-checkout This hook is invoked when a [git-checkout[1]](git-checkout) or [git-switch[1]](git-switch) is run after having updated the worktree. The hook is given three parameters: the ref of the previous HEAD, the ref of the new HEAD (which may or may not have changed), and a flag indicating whether the checkout was a branch checkout (changing branches, flag=1) or a file checkout (retrieving a file from the index, flag=0). This hook cannot affect the outcome of `git switch` or `git checkout`, other than that the hook’s exit status becomes the exit status of these two commands. It is also run after [git-clone[1]](git-clone), unless the `--no-checkout` (`-n`) option is used. The first parameter given to the hook is the null-ref, the second the ref of the new HEAD and the flag is always 1. Likewise for `git worktree add` unless `--no-checkout` is used. This hook can be used to perform repository validity checks, auto-display differences from the previous HEAD if different, or set working dir metadata properties. ### post-merge This hook is invoked by [git-merge[1]](git-merge), which happens when a `git pull` is done on a local repository. 
The hook takes a single parameter, a status flag specifying whether or not the merge being done was a squash merge. This hook cannot affect the outcome of `git merge` and is not executed if the merge failed due to conflicts. This hook can be used in conjunction with a corresponding pre-commit hook to save and restore any form of metadata associated with the working tree (e.g. permissions/ownership, ACLs, etc.). See contrib/hooks/setgitperms.perl for an example of how to do this. ### pre-push This hook is called by [git-push[1]](git-push) and can be used to prevent a push from taking place. The hook is called with two parameters which provide the name and location of the destination remote; if a named remote is not being used, both values will be the same. Information about what is to be pushed is provided on the hook’s standard input with lines of the form: ``` <local ref> SP <local object name> SP <remote ref> SP <remote object name> LF ``` For instance, if the command `git push origin master:foreign` were run, the hook would receive a line like the following: ``` refs/heads/master 67890 refs/heads/foreign 12345 ``` although the full object name would be supplied. If the foreign ref does not yet exist the `<remote object name>` will be the all-zeroes object name. If a ref is to be deleted, the `<local ref>` will be supplied as `(delete)` and the `<local object name>` will be the all-zeroes object name. If the local commit was specified by something other than a name which could be expanded (such as `HEAD~`, or an object name) it will be supplied as it was originally given. If this hook exits with a non-zero status, `git push` will abort without pushing anything. Information about why the push is rejected may be sent to the user by writing to standard error. ### pre-receive This hook is invoked by [git-receive-pack[1]](git-receive-pack) when it reacts to `git push` and updates reference(s) in its repository. 
Just before starting to update refs on the remote repository, the pre-receive hook is invoked. Its exit status determines the success or failure of the update. This hook executes once for the receive operation. It takes no arguments, but for each ref to be updated it receives on standard input a line of the format: ``` <old-value> SP <new-value> SP <ref-name> LF ``` where `<old-value>` is the old object name stored in the ref, `<new-value>` is the new object name to be stored in the ref and `<ref-name>` is the full name of the ref. When creating a new ref, `<old-value>` is the all-zeroes object name. If the hook exits with non-zero status, none of the refs will be updated. If the hook exits with zero, updating of individual refs can still be prevented by the [*update*](#update) hook. Both standard output and standard error output are forwarded to `git send-pack` on the other end, so you can simply `echo` messages for the user. The number of push options given on the command line of `git push --push-option=...` can be read from the environment variable `GIT_PUSH_OPTION_COUNT`, and the options themselves are found in `GIT_PUSH_OPTION_0`, `GIT_PUSH_OPTION_1`,…​ If it is negotiated to not use the push options phase, the environment variables will not be set. If the client selects to use push options, but doesn’t transmit any, the count variable will be set to zero, `GIT_PUSH_OPTION_COUNT=0`. See the section on "Quarantine Environment" in [git-receive-pack[1]](git-receive-pack) for some caveats. ### update This hook is invoked by [git-receive-pack[1]](git-receive-pack) when it reacts to `git push` and updates reference(s) in its repository. Just before updating the ref on the remote repository, the update hook is invoked. Its exit status determines the success or failure of the ref update. 
The hook executes once for each ref to be updated, and takes three parameters: * the name of the ref being updated, * the old object name stored in the ref, * and the new object name to be stored in the ref. A zero exit from the update hook allows the ref to be updated. Exiting with a non-zero status prevents `git receive-pack` from updating that ref. This hook can be used to prevent a forced update on certain refs by making sure that the object name is a commit object that is a descendant of the commit object named by the old object name. That is, to enforce a "fast-forward only" policy. It could also be used to log the old..new status. However, it does not know the entire set of branches, so when used naively it would end up firing one e-mail per ref. The [*post-receive*](#post-receive) hook is more suited to that. In an environment that restricts the users' access only to git commands over the wire, this hook can be used to implement access control without relying on filesystem ownership and group membership. See [git-shell[1]](git-shell) for how you might use the login shell to restrict the user’s access to only git commands. Both standard output and standard error output are forwarded to `git send-pack` on the other end, so you can simply `echo` messages for the user. The default `update` hook, when enabled, and with the `hooks.allowunannotated` config option unset or set to false, prevents unannotated tags from being pushed. ### proc-receive This hook is invoked by [git-receive-pack[1]](git-receive-pack). If the server has set the multi-valued config variable `receive.procReceiveRefs`, and the commands sent to `receive-pack` have matching reference names, these commands will be executed by this hook, instead of by the internal `execute_commands()` function. This hook is responsible for updating the relevant references and reporting the results back to `receive-pack`. This hook executes once for the receive operation. 
It takes no arguments, but uses a pkt-line format protocol to communicate with `receive-pack`: it reads commands and push-options, and sends back results. In the following example for the protocol, the letter `S` stands for `receive-pack` and the letter `H` stands for this hook. ``` # Version and features negotiation. S: PKT-LINE(version=1\0push-options atomic...) S: flush-pkt H: PKT-LINE(version=1\0push-options...) H: flush-pkt ``` ``` # Send commands from server to the hook. S: PKT-LINE(<old-oid> <new-oid> <ref>) S: ... ... S: flush-pkt # Send push-options only if the 'push-options' feature is enabled. S: PKT-LINE(push-option) S: ... ... S: flush-pkt ``` ``` # Receive result from the hook. # OK, run this command successfully. H: PKT-LINE(ok <ref>) # NO, I reject it. H: PKT-LINE(ng <ref> <reason>) # Fall through, let 'receive-pack' execute it. H: PKT-LINE(ok <ref>) H: PKT-LINE(option fall-through) # OK, but has an alternate reference. The alternate reference name # and other status can be given in option directives. H: PKT-LINE(ok <ref>) H: PKT-LINE(option refname <refname>) H: PKT-LINE(option old-oid <old-oid>) H: PKT-LINE(option new-oid <new-oid>) H: PKT-LINE(option forced-update) H: ... ... H: flush-pkt ``` Each command for the `proc-receive` hook may point to a pseudo-reference and always has a zero old-oid, while the `proc-receive` hook may update an alternate reference which may already exist with a non-zero old-oid. In this case, the hook will use "option" directives to report extended attributes for the reference given by the leading "ok" directive. The hook should report the commands in the same order as the input. The exit status of the `proc-receive` hook only determines the success or failure of the group of commands sent to it, unless atomic push is in use. ### post-receive This hook is invoked by [git-receive-pack[1]](git-receive-pack) when it reacts to `git push` and updates reference(s) in its repository. 
It executes on the remote repository once after all the refs have been updated. This hook executes once for the receive operation. It takes no arguments, but gets the same information as the [*pre-receive*](#pre-receive) hook does on its standard input. This hook does not affect the outcome of `git receive-pack`, as it is called after the real work is done. This supersedes the [*post-update*](#post-update) hook in that it gets both old and new values of all the refs in addition to their names. Both standard output and standard error output are forwarded to `git send-pack` on the other end, so you can simply `echo` messages for the user. The default `post-receive` hook is empty, but there is a sample script `post-receive-email` provided in the `contrib/hooks` directory in Git distribution, which implements sending commit emails. The number of push options given on the command line of `git push --push-option=...` can be read from the environment variable `GIT_PUSH_OPTION_COUNT`, and the options themselves are found in `GIT_PUSH_OPTION_0`, `GIT_PUSH_OPTION_1`,…​ If it is negotiated to not use the push options phase, the environment variables will not be set. If the client selects to use push options, but doesn’t transmit any, the count variable will be set to zero, `GIT_PUSH_OPTION_COUNT=0`. ### post-update This hook is invoked by [git-receive-pack[1]](git-receive-pack) when it reacts to `git push` and updates reference(s) in its repository. It executes on the remote repository once after all the refs have been updated. It takes a variable number of parameters, each of which is the name of ref that was actually updated. This hook is meant primarily for notification, and cannot affect the outcome of `git receive-pack`. The `post-update` hook can tell what are the heads that were pushed, but it does not know what their original and updated values are, so it is a poor place to do log old..new. 
The [*post-receive*](#post-receive) hook does get both original and updated values of the refs. You might consider it instead if you need them. When enabled, the default `post-update` hook runs `git update-server-info` to keep the information used by dumb transports (e.g., HTTP) up to date. If you are publishing a Git repository that is accessible via HTTP, you should probably enable this hook. Both standard output and standard error output are forwarded to `git send-pack` on the other end, so you can simply `echo` messages for the user. ### reference-transaction This hook is invoked by any Git command that performs reference updates. It executes whenever a reference transaction is prepared, committed or aborted and may thus get called multiple times. The hook does not cover symbolic references (but that may change in the future). The hook takes exactly one argument, which is the current state the given reference transaction is in: * "prepared": All reference updates have been queued to the transaction and references were locked on disk. * "committed": The reference transaction was committed and all references now have their respective new value. * "aborted": The reference transaction was aborted, no changes were performed and the locks have been released. For each reference update that was added to the transaction, the hook receives on standard input a line of the format: ``` <old-value> SP <new-value> SP <ref-name> LF ``` where `<old-value>` is the old object name passed into the reference transaction, `<new-value>` is the new object name to be stored in the ref and `<ref-name>` is the full name of the ref. When force updating the reference regardless of its current value or when the reference is to be created anew, `<old-value>` is the all-zeroes object name. To distinguish these cases, you can inspect the current value of `<ref-name>` via `git rev-parse`. The exit status of the hook is ignored for any state except for the "prepared" state. 
In the "prepared" state, a non-zero exit status will cause the transaction to be aborted. The hook will not be called with "aborted" state in that case. ### push-to-checkout This hook is invoked by [git-receive-pack[1]](git-receive-pack) when it reacts to `git push` and updates reference(s) in its repository, and when the push tries to update the branch that is currently checked out and the `receive.denyCurrentBranch` configuration variable is set to `updateInstead`. Such a push by default is refused if the working tree and the index of the remote repository has any difference from the currently checked out commit; when both the working tree and the index match the current commit, they are updated to match the newly pushed tip of the branch. This hook is to be used to override the default behaviour. The hook receives the commit with which the tip of the current branch is going to be updated. It can exit with a non-zero status to refuse the push (when it does so, it must not modify the index or the working tree). Or it can make any necessary changes to the working tree and to the index to bring them to the desired state when the tip of the current branch is updated to the new commit, and exit with a zero status. For example, the hook can simply run `git read-tree -u -m HEAD "$1"` in order to emulate `git fetch` that is run in the reverse direction with `git push`, as the two-tree form of `git read-tree -u -m` is essentially the same as `git switch` or `git checkout` that switches branches while keeping the local changes in the working tree that do not interfere with the difference between the branches. ### pre-auto-gc This hook is invoked by `git gc --auto` (see [git-gc[1]](git-gc)). It takes no parameter, and exiting with non-zero status from this script causes the `git gc --auto` to abort. 
### post-rewrite This hook is invoked by commands that rewrite commits ([git-commit[1]](git-commit) when called with `--amend` and [git-rebase[1]](git-rebase); however, full-history (re)writing tools like [git-fast-import[1]](git-fast-import) or [git-filter-repo](https://github.com/newren/git-filter-repo) typically do not call it!). Its first argument denotes the command it was invoked by: currently one of `amend` or `rebase`. Further command-dependent arguments may be passed in the future. The hook receives a list of the rewritten commits on stdin, in the format ``` <old-object-name> SP <new-object-name> [ SP <extra-info> ] LF ``` The `extra-info` is again command-dependent. If it is empty, the preceding SP is also omitted. Currently, no commands pass any `extra-info`. The hook always runs after the automatic note copying (see "notes.rewrite.<command>" in [git-config[1]](git-config)) has happened, and thus has access to these notes. The following command-specific comments apply: rebase For the `squash` and `fixup` operation, all commits that were squashed are listed as being rewritten to the squashed commit. This means that there will be several lines sharing the same `new-object-name`. The commits are guaranteed to be listed in the order that they were processed by rebase. ### sendemail-validate This hook is invoked by [git-send-email[1]](git-send-email). It takes a single parameter, the name of the file that holds the e-mail to be sent. Exiting with a non-zero status causes `git send-email` to abort before sending any e-mails. ### fsmonitor-watchman This hook is invoked when the configuration option `core.fsmonitor` is set to `.git/hooks/fsmonitor-watchman` or `.git/hooks/fsmonitor-watchmanv2` depending on the version of the hook to use. Version 1 takes two arguments, a version (1) and the time in elapsed nanoseconds since midnight, January 1, 1970. Version 2 takes two arguments, a version (2) and a token that is used for identifying changes since the token. 
For watchman this would be a clock id. This version must output to stdout the new token followed by a NUL before the list of files. The hook should output to stdout the list of all files in the working directory that may have changed since the requested time. The logic should be inclusive so that it does not miss any potential changes. The paths should be relative to the root of the working directory and be separated by a single NUL. It is OK to include files which have not actually changed. All changes including newly-created and deleted files should be included. When files are renamed, both the old and the new name should be included. Git will limit what files it checks for changes as well as which directories are checked for untracked files based on the path names given. An optimized way to tell git "all files have changed" is to return the filename `/`. The exit status determines whether git will use the data from the hook to limit its search. On error, it will fall back to verifying all files and folders. ### p4-changelist This hook is invoked by `git-p4 submit`. The `p4-changelist` hook is executed after the changelist message has been edited by the user. It can be bypassed with the `--no-verify` option. It takes a single parameter, the name of the file that holds the proposed changelist text. Exiting with a non-zero status causes the command to abort. The hook is allowed to edit the changelist file and can be used to normalize the text into some project standard format. It can also be used to refuse the submit after inspecting the message file. Run `git-p4 submit --help` for details. ### p4-prepare-changelist This hook is invoked by `git-p4 submit`. The `p4-prepare-changelist` hook is executed right after preparing the default changelist message and before the editor is started. It takes one parameter, the name of the file that contains the changelist text. Exiting with a non-zero status from the script will abort the process. 
The purpose of the hook is to edit the message file in place, and it is not suppressed by the `--no-verify` option. This hook is called even if `--prepare-p4-only` is set. Run `git-p4 submit --help` for details. ### p4-post-changelist This hook is invoked by `git-p4 submit`. The `p4-post-changelist` hook is invoked after the submit has successfully occurred in P4. It takes no parameters and is meant primarily for notification and cannot affect the outcome of the git p4 submit action. Run `git-p4 submit --help` for details. ### p4-pre-submit This hook is invoked by `git-p4 submit`. It takes no parameters and nothing from standard input. Exiting with non-zero status from this script prevents `git-p4 submit` from launching. It can be bypassed with the `--no-verify` command line option. Run `git-p4 submit --help` for details. ### post-index-change This hook is invoked when the index is written in read-cache.c do\_write\_locked\_index. The first parameter passed to the hook indicates whether the working directory was updated: "1" means the working directory was updated, "0" means it was not. The second parameter indicates whether the index was updated and the skip-worktree bit could have changed: "1" means skip-worktree bits could have been updated, "0" means they were not. Only one parameter should be set to "1" when the hook runs; the hook should never be invoked with both parameters set to "1". See also -------- [git-hook[1]](git-hook)
git gitprotocol-http gitprotocol-http ================ Name ---- gitprotocol-http - Git HTTP-based protocols Synopsis -------- ``` <over-the-wire-protocol> ``` Description ----------- Git supports two HTTP based transfer protocols: a "dumb" protocol, which requires only a standard HTTP server on the server end of the connection, and a "smart" protocol, which requires a Git-aware CGI (or server module). This document describes both protocols. As a design feature, smart clients can automatically upgrade "dumb" protocol URLs to smart URLs. This permits all users to have the same published URL, and the peers automatically select the most efficient transport available to them. URL format ---------- URLs for Git repositories accessed by HTTP use the standard HTTP URL syntax documented by RFC 1738, so they are of the form: ``` http://<host>:<port>/<path>?<searchpart> ``` Within this documentation the placeholder `$GIT_URL` will stand for the http:// repository URL entered by the end-user. Servers SHOULD handle all requests to locations matching `$GIT_URL`, as both the "smart" and "dumb" HTTP protocols used by Git operate by appending additional path components onto the end of the user supplied `$GIT_URL` string. An example of a dumb client requesting a loose object: ``` $GIT_URL: http://example.com:8080/git/repo.git URL request: http://example.com:8080/git/repo.git/objects/d0/49f6c27a2244e12041955e262a404c7faba355 ``` An example of a smart request to a catch-all gateway: ``` $GIT_URL: http://example.com/daemon.cgi?svc=git&q= URL request: http://example.com/daemon.cgi?svc=git&q=/info/refs&service=git-receive-pack ``` An example of a request to a submodule: ``` $GIT_URL: http://example.com/git/repo.git/path/submodule.git URL request: http://example.com/git/repo.git/path/submodule.git/info/refs ``` Clients MUST strip a trailing `/`, if present, from the user supplied `$GIT_URL` string to prevent empty path tokens (`//`) from appearing in any URL sent to a server. 
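The trailing-slash rule can be sketched as a small client-side helper; `normalize_git_url` is a made-up name used for illustration, not part of Git:

```shell
#!/bin/sh
# Strip a single trailing "/" from a user-supplied $GIT_URL so that
# appended path components ("/info/refs", ...) never produce "//".
# normalize_git_url is a hypothetical helper name, not a Git command.

normalize_git_url() {
    url="$1"
    # ${url%/} removes one trailing slash, if present.
    printf '%s' "${url%/}"
}
```

For example, `normalize_git_url 'http://example.com/git/repo.git/'` yields `http://example.com/git/repo.git`, so `$url/info/refs` then expands without an empty path token.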
Compatible clients MUST expand `$GIT_URL/info/refs` as `foo/info/refs` and not `foo//info/refs`. Authentication -------------- Standard HTTP authentication is used if authentication is required to access a repository, and MAY be configured and enforced by the HTTP server software. Because Git repositories are accessed by standard path components, server administrators MAY use directory based permissions within their HTTP server to control repository access. Clients SHOULD support Basic authentication as described by RFC 2617. Servers SHOULD support Basic authentication by relying upon the HTTP server placed in front of the Git server software. Servers SHOULD NOT require HTTP cookies for the purposes of authentication or access control. Clients and servers MAY support other common forms of HTTP based authentication, such as Digest authentication. SSL --- Clients and servers SHOULD support SSL, particularly to protect passwords when relying on Basic HTTP authentication. Session state ------------- The Git over HTTP protocol (much like HTTP itself) is stateless from the perspective of the HTTP server side. All state MUST be retained and managed by the client process. This permits simple round-robin load-balancing on the server side, without needing to worry about state management. Clients MUST NOT require state management on the server side in order to function correctly. Servers MUST NOT require HTTP cookies in order to function correctly. Clients MAY store and forward HTTP cookies during request processing as described by RFC 2616 (HTTP/1.1). Servers SHOULD ignore any cookies sent by a client. General request processing -------------------------- Except where noted, all standard HTTP behavior SHOULD be assumed by both client and server. This includes (but is not necessarily limited to): If there is no repository at `$GIT_URL`, or the resource pointed to by a location matching `$GIT_URL` does not exist, the server MUST NOT respond with a `200 OK` response. 
A server SHOULD respond with `404 Not Found`, `410 Gone`, or any other suitable HTTP status code which does not imply the resource exists as requested. If there is a repository at `$GIT_URL`, but access is not currently permitted, the server MUST respond with the `403 Forbidden` HTTP status code. Servers SHOULD support both HTTP 1.0 and HTTP 1.1. Servers SHOULD support chunked encoding for both request and response bodies. Clients SHOULD support both HTTP 1.0 and HTTP 1.1. Clients SHOULD support chunked encoding for both request and response bodies. Servers MAY return ETag and/or Last-Modified headers. Clients MAY revalidate cached entities by including If-Modified-Since and/or If-None-Match request headers. Servers MAY return `304 Not Modified` if the relevant headers appear in the request and the entity has not changed. Clients MUST treat `304 Not Modified` identically to `200 OK` by reusing the cached entity. Clients MAY reuse a cached entity without revalidation if the Cache-Control and/or Expires header permits caching. Clients and servers MUST follow RFC 2616 for cache controls. Discovering references ---------------------- All HTTP clients MUST begin either a fetch or a push exchange by discovering the references available on the remote repository. ### Dumb Clients HTTP clients that only support the "dumb" protocol MUST discover references by making a request for the special info/refs file of the repository. Dumb HTTP clients MUST make a `GET` request to `$GIT_URL/info/refs`, without any search/query parameters. ``` C: GET $GIT_URL/info/refs HTTP/1.0 ``` ``` S: 200 OK S: S: 95dcfa3633004da0049d3d0fa03f80589cbcaf31 refs/heads/maint S: d049f6c27a2244e12041955e262a404c7faba355 refs/heads/master S: 2cb58b79488a98d2721cea644875a8dd0026b115 refs/tags/v1.0 S: a3c2e2402b99163d1d59756e5f207ae21cccba4c refs/tags/v1.0^{} ``` The Content-Type of the returned info/refs entity SHOULD be `text/plain; charset=utf-8`, but MAY be any content type. 
Clients MUST NOT attempt to validate the returned Content-Type. Dumb servers MUST NOT return a Content-Type starting with `application/x-git-`. Cache-Control headers MAY be returned to disable caching of the returned entity. When examining the response, clients SHOULD only examine the HTTP status code. Valid responses are `200 OK` or `304 Not Modified`. The returned content is a UNIX formatted text file describing each ref and its known value. The file SHOULD be sorted by name according to the C locale ordering. The file SHOULD NOT include the default ref named `HEAD`. ``` info_refs = *( ref_record ) ref_record = any_ref / peeled_ref ``` ``` any_ref = obj-id HTAB refname LF peeled_ref = obj-id HTAB refname LF obj-id HTAB refname "^{}" LF ``` ### Smart Clients HTTP clients that support the "smart" protocol (or both the "smart" and "dumb" protocols) MUST discover references by making a parameterized request for the info/refs file of the repository. The request MUST contain exactly one query parameter, `service=$servicename`, where `$servicename` MUST be the service name the client wishes to contact to complete the operation. The request MUST NOT contain additional query parameters. 
``` C: GET $GIT_URL/info/refs?service=git-upload-pack HTTP/1.0 ``` dumb server reply: ``` S: 200 OK S: S: 95dcfa3633004da0049d3d0fa03f80589cbcaf31 refs/heads/maint S: d049f6c27a2244e12041955e262a404c7faba355 refs/heads/master S: 2cb58b79488a98d2721cea644875a8dd0026b115 refs/tags/v1.0 S: a3c2e2402b99163d1d59756e5f207ae21cccba4c refs/tags/v1.0^{} ``` smart server reply: ``` S: 200 OK S: Content-Type: application/x-git-upload-pack-advertisement S: Cache-Control: no-cache S: S: 001e# service=git-upload-pack\n S: 0000 S: 004895dcfa3633004da0049d3d0fa03f80589cbcaf31 refs/heads/maint\0multi_ack\n S: 003fd049f6c27a2244e12041955e262a404c7faba355 refs/heads/master\n S: 003c2cb58b79488a98d2721cea644875a8dd0026b115 refs/tags/v1.0\n S: 003fa3c2e2402b99163d1d59756e5f207ae21cccba4c refs/tags/v1.0^{}\n S: 0000 ``` The client may send Extra Parameters (see [gitprotocol-pack[5]](gitprotocol-pack)) as a colon-separated string in the Git-Protocol HTTP header. The server uses the `--http-backend-info-refs` option to [git-upload-pack[1]](git-upload-pack) to generate this response. #### Dumb Server Response Dumb servers MUST respond with the dumb server reply format. See the prior section under dumb clients for a more detailed description of the dumb server response. #### Smart Server Response If the server does not recognize the requested service name, or the requested service name has been disabled by the server administrator, the server MUST respond with the `403 Forbidden` HTTP status code. Otherwise, smart servers MUST respond with the smart server reply format for the requested service name. Cache-Control headers SHOULD be used to disable caching of the returned entity. The Content-Type MUST be `application/x-$servicename-advertisement`. Clients SHOULD fall back to the dumb protocol if another content type is returned. When falling back to the dumb protocol, clients SHOULD NOT make an additional request to `$GIT_URL/info/refs`, but instead SHOULD use the response already in hand. 
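Each line in the smart reply above is a pkt-line: a 4-digit lowercase hexadecimal length, which counts the four length bytes themselves, followed by the payload, with `0000` serving as the flush marker (see [gitprotocol-pack[5]](gitprotocol-pack)). A minimal encoder sketch, assuming ASCII payloads since `${#1}` counts characters rather than bytes:

```shell
#!/bin/sh
# pkt_line frames a payload as a pkt-line: a 4-hex-digit length field
# (payload length plus 4 for the length field itself) then the payload.
# Assumes ASCII payloads; ${#1} counts characters, not bytes.
pkt_line() {
    printf '%04x%s' $(( ${#1} + 4 )) "$1"
}

# flush_pkt emits the special "0000" end/flush marker.
flush_pkt() {
    printf '0000'
}
```

For example, the payload `# service=git-upload-pack` plus an LF is 26 bytes, so the frame begins with `001e`, matching the first line of the advertisement above.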
Clients MUST NOT continue if they do not support the dumb protocol. Clients MUST validate the status code is either `200 OK` or `304 Not Modified`. Clients MUST validate that the first five bytes of the response entity match the regex `^[0-9a-f]{4}#`. If this test fails, clients MUST NOT continue. Clients MUST parse the entire response as a sequence of pkt-line records. Clients MUST verify the first pkt-line is `# service=$servicename`. Servers MUST set $servicename to be the request parameter value. Servers SHOULD include an LF at the end of this line. Clients MUST ignore an LF at the end of the line. Servers MUST terminate the response with the magic `0000` end pkt-line marker. The returned response is a pkt-line stream describing each ref and its known value. The stream SHOULD be sorted by name according to the C locale ordering. The stream SHOULD include the default ref named `HEAD` as the first ref. The stream MUST include capability declarations behind a NUL on the first ref. The returned response contains "version 1" if "version=1" was sent as an Extra Parameter. ``` smart_reply = PKT-LINE("# service=$servicename" LF) "0000" *1("version 1") ref_list "0000" ref_list = empty_list / non_empty_list ``` ``` empty_list = PKT-LINE(zero-id SP "capabilities^{}" NUL cap-list LF) ``` ``` non_empty_list = PKT-LINE(obj-id SP name NUL cap-list LF) *ref_record ``` ``` cap-list = capability *(SP capability) capability = 1*(LC_ALPHA / DIGIT / "-" / "_") LC_ALPHA = %x61-7A ``` ``` ref_record = any_ref / peeled_ref any_ref = PKT-LINE(obj-id SP name LF) peeled_ref = PKT-LINE(obj-id SP name LF) PKT-LINE(obj-id SP name "^{}" LF) ``` Smart service git-upload-pack ----------------------------- This service reads from the repository pointed to by `$GIT_URL`. Clients MUST first perform ref discovery with `$GIT_URL/info/refs?service=git-upload-pack`.
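The advertisement returned by that discovery request follows the validation rules and smart_reply grammar above; a non-authoritative Python sketch of reading it (the helper names are invented, and the whole response body is assumed to be in memory):

```python
import re

def read_pkt_lines(body: bytes):
    """Split a smart reply into pkt-line payloads; b'' stands for a flush-pkt."""
    # The first five bytes must match ^[0-9a-f]{4}# or the client must not continue.
    if not re.match(rb"^[0-9a-f]{4}#", body):
        raise ValueError("not a smart reply")
    pkts, pos = [], 0
    while pos < len(body):
        length = int(body[pos:pos + 4], 16)
        if length == 0:                      # "0000" flush-pkt
            pkts.append(b"")
            pos += 4
        else:
            pkts.append(body[pos + 4:pos + length])
            pos += length
    return pkts

def parse_advertisement(pkts, service: bytes):
    """Check the service announcement, then collect refs and capabilities."""
    if pkts[0].rstrip(b"\n") != b"# service=" + service:
        raise ValueError("unexpected service announcement")
    refs, caps = {}, []
    for pkt in pkts[2:]:                     # skip the announcement and its flush
        if pkt == b"":                       # terminating flush-pkt
            break
        line = pkt.rstrip(b"\n")
        if not caps and b"\x00" in line:     # capabilities hide behind NUL on the first ref
            line, cap_blob = line.split(b"\x00", 1)
            caps = cap_blob.split(b" ")
        obj_id, name = line.split(b" ", 1)
        refs[name] = obj_id
    return refs, caps
```

The sketch ignores the optional "version 1" pkt-line and peeled `^{}` entries, which a real client would also handle.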
``` C: POST $GIT_URL/git-upload-pack HTTP/1.0 C: Content-Type: application/x-git-upload-pack-request C: C: 0032want 0a53e9ddeaddad63ad106860237bbf53411d11a7\n C: 0032have 441b40d833fdfa93eb2908e52742248faf0ee993\n C: 0000 ``` ``` S: 200 OK S: Content-Type: application/x-git-upload-pack-result S: Cache-Control: no-cache S: S: ....ACK %s, continue S: ....NAK ``` Clients MUST NOT reuse or revalidate a cached response. Servers MUST include sufficient Cache-Control headers to prevent caching of the response. Servers SHOULD support all capabilities defined here. Clients MUST send at least one "want" command in the request body. Clients MUST NOT reference an id in a "want" command which did not appear in the response obtained through ref discovery unless the server advertises capability `allow-tip-sha1-in-want` or `allow-reachable-sha1-in-want`. ``` compute_request = want_list have_list request_end request_end = "0000" / "done" ``` ``` want_list = PKT-LINE(want SP cap_list LF) *(want_pkt) want_pkt = PKT-LINE(want LF) want = "want" SP id cap_list = capability *(SP capability) ``` ``` have_list = *PKT-LINE("have" SP id LF) ``` TODO: Document this further. ### The Negotiation Algorithm The computation to select the minimal pack proceeds as follows (C = client, S = server): `init step:` C: Use ref discovery to obtain the advertised refs. C: Place any object seen into set `advertised`. C: Build an empty set, `common`, to hold the objects that are later determined to be on both ends. C: Build a set, `want`, of the objects from `advertised` the client wants to fetch, based on what it saw during ref discovery. C: Start a queue, `c_pending`, ordered by commit time (popping newest first). Add all client refs. When a commit is popped from the queue its parents SHOULD be automatically inserted back. Commits MUST only enter the queue once. `one compute step:` C: Send one `$GIT_URL/git-upload-pack` request: ``` C: 0032want <want #1>............................... 
C: 0032want <want #2>............................... .... C: 0032have <common #1>............................. C: 0032have <common #2>............................. .... C: 0032have <have #1>............................... C: 0032have <have #2>............................... .... C: 0000 ``` The stream is organized into "commands", with each command appearing by itself in a pkt-line. Within a command line, the text leading up to the first space is the command name, and the remainder of the line to the first LF is the value. Command lines are terminated with an LF as the last byte of the pkt-line value. Commands MUST appear in the following order, if they appear at all in the request stream: * "want" * "have" The stream is terminated by a pkt-line flush (`0000`). A single "want" or "have" command MUST have one hex formatted object name as its value. Multiple object names MUST be sent by sending multiple commands. Object names MUST be given using the object format negotiated through the `object-format` capability (default SHA-1). The `have` list is created by popping the first 32 commits from `c_pending`. Fewer can be supplied if `c_pending` empties. If the client has sent 256 "have" commits and has not yet received one of those back from `s_common`, or the client has emptied `c_pending`, it SHOULD include a "done" command to let the server know it won’t proceed: ``` C: 0009done ``` S: Parse the git-upload-pack request: Verify all objects in `want` are directly reachable from refs. The server MAY walk backwards through history or through the reflog to permit slightly stale requests. If no "want" objects are received, send an error: TODO: Define error if no "want" lines are requested. If any "want" object is not reachable, send an error: TODO: Define error if an invalid "want" is requested. Create an empty list, `s_common`. If "have" was sent: Loop through the objects in the order supplied by the client.
For each object, if the server has the object reachable from a ref, add it to `s_common`. If a commit is added to `s_common`, do not add any ancestors, even if they also appear in `have`. S: Send the git-upload-pack response: If the server has found a closed set of objects to pack or the request ends with "done", it replies with the pack. TODO: Document the pack based response ``` S: PACK... ``` The returned stream is the side-band-64k protocol supported by the git-upload-pack service, and the pack is embedded into stream 1. Progress messages from the server side MAY appear in stream 2. Here a "closed set of objects" is defined to have at least one path from every "want" to at least one "common" object. If the server needs more information, it replies with a status continue response: TODO: Document the non-pack response C: Parse the upload-pack response: TODO: Document parsing response `Do another compute step.` Smart service git-receive-pack ------------------------------ This service reads from the repository pointed to by `$GIT_URL`. Clients MUST first perform ref discovery with `$GIT_URL/info/refs?service=git-receive-pack`. ``` C: POST $GIT_URL/git-receive-pack HTTP/1.0 C: Content-Type: application/x-git-receive-pack-request C: C: ....0a53e9ddeaddad63ad106860237bbf53411d11a7 441b40d833fdfa93eb2908e52742248faf0ee993 refs/heads/maint\0 report-status C: 0000 C: PACK.... ``` ``` S: 200 OK S: Content-Type: application/x-git-receive-pack-result S: Cache-Control: no-cache S: S: .... ``` Clients MUST NOT reuse or revalidate a cached response. Servers MUST include sufficient Cache-Control headers to prevent caching of the response. Servers SHOULD support all capabilities defined here. Clients MUST send at least one command in the request body. Within the command portion of the request body clients SHOULD send the id obtained through ref discovery as old\_id. 
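The commands in the request body use the same pkt-line framing as the rest of this protocol; a hedged Python sketch of framing the command from the example request above (object ids copied from that sample; `pkt_line` is an invented helper name):

```python
def pkt_line(payload: bytes) -> bytes:
    # Four lowercase hex digits give the total length, header included.
    return b"%04x" % (len(payload) + 4) + payload

old_id = b"0a53e9ddeaddad63ad106860237bbf53411d11a7"
new_id = b"441b40d833fdfa93eb2908e52742248faf0ee993"
command = pkt_line(old_id + b" " + new_id + b" refs/heads/maint\x00 report-status\n")
# The command list ends with a flush-pkt, after which the PACK data follows.
body = command + b"0000"
```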
``` update_request = command_list "PACK" <binary data> ``` ``` command_list = PKT-LINE(command NUL cap_list LF) *(command_pkt) command_pkt = PKT-LINE(command LF) cap_list = *(SP capability) SP ``` ``` command = create / delete / update create = zero-id SP new_id SP name delete = old_id SP zero-id SP name update = old_id SP new_id SP name ``` TODO: Document this further. References ---------- [RFC 1738: Uniform Resource Locators (URL)](http://www.ietf.org/rfc/rfc1738.txt) [RFC 2616: Hypertext Transfer Protocol — HTTP/1.1](http://www.ietf.org/rfc/rfc2616.txt) See also -------- [gitprotocol-pack[5]](gitprotocol-pack) [gitprotocol-capabilities[5]](gitprotocol-capabilities) git-diff-files ============== Name ---- git-diff-files - Compares files in the working tree and the index Synopsis -------- ``` git diff-files [-q] [-0 | -1 | -2 | -3 | -c | --cc] [<common-diff-options>] [<path>…​] ``` Description ----------- Compares the files in the working tree and the index. When paths are specified, compares only those named paths. Otherwise all entries in the index are compared. The output format is the same as for `git diff-index` and `git diff-tree`. Options ------- -p -u --patch Generate patch (see section on generating patches). -s --no-patch Suppress diff output. Useful for commands like `git show` that show the patch by default, or to cancel the effect of `--patch`. -U<n> --unified=<n> Generate diffs with <n> lines of context instead of the usual three. Implies `--patch`. --output=<file> Output to a specific file instead of stdout. --output-indicator-new=<char> --output-indicator-old=<char> --output-indicator-context=<char> Specify the character used to indicate new, old or context lines in the generated patch. Normally they are `+`, `-` and ' ' respectively. --raw Generate the diff in raw format. This is the default. --patch-with-raw Synonym for `-p --raw`.
--indent-heuristic Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default. --no-indent-heuristic Disable the indent heuristic. --minimal Spend extra time to make sure the smallest possible diff is produced. --patience Generate a diff using the "patience diff" algorithm. --histogram Generate a diff using the "histogram diff" algorithm. --anchored=<text> Generate a diff using the "anchored diff" algorithm. This option may be specified more than once. If a line exists in both the source and destination, exists only once, and starts with this text, this algorithm attempts to prevent it from appearing as a deletion or addition in the output. It uses the "patience diff" algorithm internally. --diff-algorithm={patience|minimal|histogram|myers} Choose a diff algorithm. The variants are as follows: `default`, `myers` The basic greedy diff algorithm. Currently, this is the default. `minimal` Spend extra time to make sure the smallest possible diff is produced. `patience` Use "patience diff" algorithm when generating patches. `histogram` This algorithm extends the patience algorithm to "support low-occurrence common elements". For instance, if you configured the `diff.algorithm` variable to a non-default value and want to use the default one, then you have to use `--diff-algorithm=default` option. --stat[=<width>[,<name-width>[,<count>]]] Generate a diffstat. By default, as much space as necessary will be used for the filename part, and the rest for the graph part. Maximum width defaults to terminal width, or 80 columns if not connected to a terminal, and can be overridden by `<width>`. The width of the filename part can be limited by giving another width `<name-width>` after a comma. The width of the graph part can be limited by using `--stat-graph-width=<width>` (affects all commands generating a stat graph) or by setting `diff.statGraphWidth=<width>` (does not affect `git format-patch`). 
By giving a third parameter `<count>`, you can limit the output to the first `<count>` lines, followed by `...` if there are more. These parameters can also be set individually with `--stat-width=<width>`, `--stat-name-width=<name-width>` and `--stat-count=<count>`. --compact-summary Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat. The information is put between the filename part and the graph part. Implies `--stat`. --numstat Similar to `--stat`, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly. For binary files, outputs two `-` instead of saying `0 0`. --shortstat Output only the last line of the `--stat` format containing total number of modified files, as well as number of added and deleted lines. -X[<param1,param2,…​>] --dirstat[=<param1,param2,…​>] Output the distribution of relative amount of changes for each sub-directory. The behavior of `--dirstat` can be customized by passing it a comma separated list of parameters. The defaults are controlled by the `diff.dirstat` configuration variable (see [git-config[1]](git-config)). The following parameters are available: `changes` Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination. This ignores the amount of pure code movements within a file. In other words, rearranging lines in a file is not counted as much as other changes. This is the default behavior when no parameter is given. `lines` Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts. (For binary files, count 64-byte chunks instead, since binary files have no natural concept of lines). 
This is a more expensive `--dirstat` behavior than the `changes` behavior, but it does count rearranged lines within a file as much as other changes. The resulting output is consistent with what you get from the other `--*stat` options. `files` Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis. This is the computationally cheapest `--dirstat` behavior, since it does not have to look at the file contents at all. `cumulative` Count changes in a child directory for the parent directory as well. Note that when using `cumulative`, the sum of the percentages reported may exceed 100%. The default (non-cumulative) behavior can be specified with the `noncumulative` parameter. <limit> An integer parameter specifies a cut-off percent (3% by default). Directories contributing less than this percentage of the changes are not shown in the output. Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: `--dirstat=files,10,cumulative`. --cumulative Synonym for --dirstat=cumulative --dirstat-by-file[=<param1,param2>…​] Synonym for --dirstat=files,param1,param2…​ --summary Output a condensed summary of extended header information such as creations, renames and mode changes. --patch-with-stat Synonym for `-p --stat`. -z When `--raw`, `--numstat`, `--name-only` or `--name-status` has been given, do not munge pathnames and use NULs as output field terminators. Without this option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). --name-only Show only names of changed files. The file names are often encoded in UTF-8. For more information see the discussion about encoding in the [git-log[1]](git-log) manual page. --name-status Show only names and status of changed files. 
See the description of the `--diff-filter` option on what the status letters mean. Just like `--name-only` the file names are often encoded in UTF-8. --submodule[=<format>] Specify how differences in submodules are shown. When specifying `--submodule=short` the `short` format is used. This format just shows the names of the commits at the beginning and end of the range. When `--submodule` or `--submodule=log` is specified, the `log` format is used. This format lists the commits in the range like [git-submodule[1]](git-submodule) `summary` does. When `--submodule=diff` is specified, the `diff` format is used. This format shows an inline diff of the changes in the submodule contents between the commit range. Defaults to `diff.submodule` or the `short` format if the config option is unset. --color[=<when>] Show colored diff. `--color` (i.e. without `=<when>`) is the same as `--color=always`. `<when>` can be one of `always`, `never`, or `auto`. --no-color Turn off colored diff. It is the same as `--color=never`. --color-moved[=<mode>] Moved lines of code are colored differently. The <mode> defaults to `no` if the option is not given and to `zebra` if the option with no mode is given. The mode must be one of: no Moved lines are not highlighted. default Is a synonym for `zebra`. This may change to a more sensible mode in the future. plain Any line that is added in one location and was removed in another location will be colored with `color.diff.newMoved`. Similarly `color.diff.oldMoved` will be used for removed lines that are added somewhere else in the diff. This mode picks up any moved line, but it is not very useful in a review to determine if a block of code was moved without permutation. blocks Blocks of moved text of at least 20 alphanumeric characters are detected greedily. The detected blocks are painted using either the `color.diff.{old,new}Moved` color. Adjacent blocks cannot be told apart. zebra Blocks of moved text are detected as in `blocks` mode. 
The blocks are painted using either the `color.diff.{old,new}Moved` color or `color.diff.{old,new}MovedAlternative`. The change between the two colors indicates that a new block was detected. dimmed-zebra Similar to `zebra`, but additional dimming of uninteresting parts of moved code is performed. The bordering lines of two adjacent blocks are considered interesting, the rest is uninteresting. `dimmed_zebra` is a deprecated synonym. --no-color-moved Turn off move detection. This can be used to override configuration settings. It is the same as `--color-moved=no`. --color-moved-ws=<modes> This configures how whitespace is ignored when performing the move detection for `--color-moved`. These modes can be given as a comma separated list: no Do not ignore whitespace when performing move detection. ignore-space-at-eol Ignore changes in whitespace at EOL. ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. allow-indentation-change Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line. This is incompatible with the other modes. --no-color-moved-ws Do not ignore whitespace when performing move detection. This can be used to override configuration settings. It is the same as `--color-moved-ws=no`. --word-diff[=<mode>] Show a word diff, using the <mode> to delimit changed words. By default, words are delimited by whitespace; see `--word-diff-regex` below. The <mode> defaults to `plain`, and must be one of: color Highlight changed words using only colors. Implies `--color`. plain Show words as `[-removed-]` and `{+added+}`. 
Makes no attempts to escape the delimiters if they appear in the input, so the output may be ambiguous. porcelain Use a special line-based format intended for script consumption. Added/removed/unchanged runs are printed in the usual unified diff format, starting with a `+`/`-`/` ` character at the beginning of the line and extending to the end of the line. Newlines in the input are represented by a tilde `~` on a line of its own. none Disable word diff again. Note that despite the name of the first mode, color is used to highlight the changed parts in all modes if enabled. --word-diff-regex=<regex> Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word. Also implies `--word-diff` unless it was already enabled. Every non-overlapping match of the <regex> is considered a word. Anything between these matches is considered whitespace and ignored(!) for the purposes of finding differences. You may want to append `|[^[:space:]]` to your regular expression to make sure that it matches all non-whitespace characters. A match that contains a newline is silently truncated(!) at the newline. For example, `--word-diff-regex=.` will treat each character as a word and, correspondingly, show differences character by character. The regex can also be set via a diff driver or configuration option, see [gitattributes[5]](gitattributes) or [git-config[1]](git-config). Giving it explicitly overrides any diff driver or configuration setting. Diff drivers override configuration settings. --color-words[=<regex>] Equivalent to `--word-diff=color` plus (if a regex was specified) `--word-diff-regex=<regex>`. --no-renames Turn off rename detection, even when the configuration file gives the default to do so. --[no-]rename-empty Whether to use empty blobs as rename source. --check Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by `core.whitespace` configuration. 
By default, trailing whitespaces (including lines that consist solely of whitespaces) and a space character that is immediately followed by a tab character inside the initial indent of the line are considered whitespace errors. Exits with non-zero status if problems are found. Not compatible with --exit-code. --ws-error-highlight=<kind> Highlight whitespace errors in the `context`, `old` or `new` lines of the diff. Multiple values are separated by comma, `none` resets previous values, `default` resets the list to `new` and `all` is a shorthand for `old,new,context`. When this option is not given, and the configuration variable `diff.wsErrorHighlight` is not set, only whitespace errors in `new` lines are highlighted. The whitespace errors are colored with `color.diff.whitespace`. --full-index Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output. --binary In addition to `--full-index`, output a binary diff that can be applied with `git-apply`. Implies `--patch`. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show the shortest prefix that is at least `<n>` hexdigits long that uniquely refers to the object. In diff-patch output format, `--full-index` takes higher precedence, i.e. if `--full-index` is specified, full blob names will be shown regardless of `--abbrev`. A non-default number of digits can be specified with `--abbrev=<n>`. -B[<n>][/<m>] --break-rewrites[=[<n>][/<m>]] Break complete rewrite changes into pairs of delete and create.
This serves two purposes: It affects the way a change that amounts to a total rewrite of a file is shown: not as a series of deletion and insertion mixed together with a very few lines that happen to match textually as the context, but as a single deletion of everything old followed by a single insertion of everything new. The number `m` controls this aspect of the -B option (defaults to 60%). `-B/70%` specifies that less than 30% of the original should remain in the result for Git to consider it a total rewrite (i.e. otherwise the resulting patch will be a series of deletion and insertion mixed together with context lines). When used with -M, a totally-rewritten file is also considered as the source of a rename (usually -M only considers a file that disappeared as the source of a rename), and the number `n` controls this aspect of the -B option (defaults to 50%). `-B20%` specifies that a change with addition and deletion compared to 20% or more of the file’s size is eligible for being picked up as a possible source of a rename to another file. -M[<n>] --find-renames[=<n>] Detect renames. If `n` is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size). For example, `-M90%` means Git should consider a delete/add pair to be a rename if more than 90% of the file hasn’t changed. Without a `%` sign, the number is to be read as a fraction, with a decimal point before it. I.e., `-M5` becomes 0.5, and is thus the same as `-M50%`. Similarly, `-M05` is the same as `-M5%`. To limit detection to exact renames, use `-M100%`. The default similarity index is 50%. -C[<n>] --find-copies[=<n>] Detect copies as well as renames. See also `--find-copies-harder`. If `n` is specified, it has the same meaning as for `-M<n>`. --find-copies-harder For performance reasons, by default, the `-C` option finds copies only if the original file of the copy was modified in the same changeset.
This flag makes the command inspect unmodified files as candidates for the source of copy. This is a very expensive operation for large projects, so use it with caution. Giving more than one `-C` option has the same effect. -D --irreversible-delete Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and `/dev/null`. The resulting patch is not meant to be applied with `patch` or `git apply`; this is solely for people who want to just concentrate on reviewing the text after the change. In addition, the output obviously lacks enough information to apply such a patch in reverse, even manually, hence the name of the option. When used together with `-B`, omit also the preimage in the deletion part of a delete/create pair. -l<num> The `-M` and `-C` options involve some preliminary steps that can detect subsets of renames/copies cheaply, followed by an exhaustive fallback portion that compares all remaining unpaired destinations to all relevant sources. (For renames, only remaining unpaired sources are relevant; for copies, all original sources are relevant.) For N sources and destinations, this exhaustive check is O(N^2). This option prevents the exhaustive portion of rename/copy detection from running if the number of source/destination files involved exceeds the specified number. Defaults to diff.renameLimit. Note that a value of 0 is treated as unlimited. --diff-filter=[(A|C|D|M|R|T|U|X|B)…​[\*]] Select only files that are Added (`A`), Copied (`C`), Deleted (`D`), Modified (`M`), Renamed (`R`), have their type (i.e. regular file, symlink, submodule, …​) changed (`T`), are Unmerged (`U`), are Unknown (`X`), or have had their pairing Broken (`B`). Any combination of the filter characters (including none) can be used. When `*` (All-or-none) is added to the combination, all paths are selected if there is any file that matches other criteria in the comparison; if there is no file that matches other criteria, nothing is selected. 
Also, these upper-case letters can be downcased to exclude. E.g. `--diff-filter=ad` excludes added and deleted paths. Note that not all diffs can feature all types. For instance, copied and renamed entries cannot appear if detection for those types is disabled. -S<string> Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use. It is useful when you’re looking for an exact block of code (like a struct), and want to know the history of that block since it first came into being: use the feature iteratively to feed the interesting block in the preimage back into `-S`, and keep going until you get the very first version of the block. Binary files are searched as well. -G<regex> Look for differences whose patch text contains added/removed lines that match <regex>. To illustrate the difference between `-S<regex> --pickaxe-regex` and `-G<regex>`, consider a commit with the following diff in the same file: ``` + return frotz(nitfol, two->ptr, 1, 0); ... - hit = frotz(nitfol, mf2.ptr, 1, 0); ``` While `git log -G"frotz\(nitfol"` will show this commit, `git log -S"frotz\(nitfol" --pickaxe-regex` will not (because the number of occurrences of that string did not change). Unless `--text` is supplied patches of binary files without a textconv filter will be ignored. See the `pickaxe` entry in [gitdiffcore[7]](gitdiffcore) for more information. --find-object=<object-id> Look for differences that change the number of occurrences of the specified object. Similar to `-S`, just the argument is different in that it doesn’t search for a specific string but for a specific object id. The object can be a blob or a submodule commit. It implies the `-t` option in `git-log` to also find trees. --pickaxe-all When `-S` or `-G` finds a change, show all the changes in that changeset, not just the files that contain the change in <string>. 
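The `-S`/`-G` distinction can be modeled in a few lines (a rough, non-authoritative sketch using the frotz example above; Git's real pickaxe implementation differs in many details):

```python
import re

def g_hit(patch: str, regex: str) -> bool:
    # -G: some added or removed line in the patch text matches the regex.
    pat = re.compile(regex)
    return any(pat.search(line[1:]) is not None
               for line in patch.splitlines()
               if line.startswith(("+", "-")) and not line.startswith(("+++", "---")))

def s_hit(old: str, new: str, needle: str) -> bool:
    # -S: the number of occurrences of the string changed between the images.
    return old.count(needle) != new.count(needle)
```

With `--pickaxe-regex`, `-S` counts regex matches instead of fixed-string occurrences.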
--pickaxe-regex Treat the <string> given to `-S` as an extended POSIX regular expression to match. -O<orderfile> Control the order in which files appear in the output. This overrides the `diff.orderFile` configuration variable (see [git-config[1]](git-config)). To cancel `diff.orderFile`, use `-O/dev/null`. The output order is determined by the order of glob patterns in <orderfile>. All files with pathnames that match the first pattern are output first, all files with pathnames that match the second pattern (but not the first) are output next, and so on. All files with pathnames that do not match any pattern are output last, as if there was an implicit match-all pattern at the end of the file. If multiple pathnames have the same rank (they match the same pattern but no earlier patterns), their output order relative to each other is the normal order. <orderfile> is parsed as follows: * Blank lines are ignored, so they can be used as separators for readability. * Lines starting with a hash ("`#`") are ignored, so they can be used for comments. Add a backslash ("`\`") to the beginning of the pattern if it starts with a hash. * Each other line contains a single pattern. Patterns have the same syntax and semantics as patterns used for fnmatch(3) without the FNM\_PATHNAME flag, except a pathname also matches a pattern if removing any number of the final pathname components matches the pattern. For example, the pattern "`foo*bar`" matches "`fooasdfbar`" and "`foo/bar/baz/asdf`" but not "`foobarx`". --skip-to=<file> --rotate-to=<file> Discard the files before the named <file> from the output (i.e. `skip to`), or move them to the end of the output (i.e. `rotate to`). These were invented primarily for use of the `git difftool` command, and may not be very useful otherwise. -R Swap two inputs; that is, show differences from index or on-disk file to tree contents. 
--relative[=<path>] --no-relative When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option. When you are not in a subdirectory (e.g. in a bare repository), you can name which subdirectory to make the output relative to by giving a <path> as an argument. `--no-relative` can be used to countermand both `diff.relative` config option and previous `--relative`. -a --text Treat all files as text. --ignore-cr-at-eol Ignore carriage-return at the end of line when doing a comparison. --ignore-space-at-eol Ignore changes in whitespace at EOL. -b --ignore-space-change Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent. -w --ignore-all-space Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none. --ignore-blank-lines Ignore changes whose lines are all blank. -I<regex> --ignore-matching-lines=<regex> Ignore changes whose all lines match <regex>. This option may be specified more than once. --inter-hunk-context=<lines> Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other. Defaults to `diff.interHunkContext` or 0 if the config option is unset. -W --function-context Show whole function as context lines for each change. The function names are determined in the same way as `git diff` works out patch hunk headers (see `Defining a custom hunk-header` in [gitattributes[5]](gitattributes)). --exit-code Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences. --quiet Disable all output of the program. Implies `--exit-code`. --ext-diff Allow an external diff helper to be executed. 
If you set an external diff driver with [gitattributes[5]](gitattributes), you need to use this option with [git-log[1]](git-log) and friends. --no-ext-diff Disallow external diff drivers. --textconv --no-textconv Allow (or disallow) external text conversion filters to be run when comparing binary files. See [gitattributes[5]](gitattributes) for details. Because textconv filters are typically a one-way conversion, the resulting diff is suitable for human consumption, but cannot be applied. For this reason, textconv filters are enabled by default only for [git-diff[1]](git-diff) and [git-log[1]](git-log), but not for [git-format-patch[1]](git-format-patch) or diff plumbing commands. --ignore-submodules[=<when>] Ignore changes to submodules in the diff generation. <when> can be either "none", "untracked", "dirty" or "all", which is the default. Using "none" will consider the submodule modified when it either contains untracked or modified files or its HEAD differs from the commit recorded in the superproject and can be used to override any settings of the `ignore` option in [git-config[1]](git-config) or [gitmodules[5]](gitmodules). When "untracked" is used submodules are not considered dirty when they only contain untracked content (but they are still scanned for modified content). Using "dirty" ignores all changes to the work tree of submodules, only changes to the commits stored in the superproject are shown (this was the behavior until 1.7.0). Using "all" hides all changes to submodules. --src-prefix=<prefix> Show the given source prefix instead of "a/". --dst-prefix=<prefix> Show the given destination prefix instead of "b/". --no-prefix Do not show any source or destination prefix. --line-prefix=<prefix> Prepend an additional prefix to every line of output. --ita-invisible-in-index By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". 
This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached". This option can be reverted with `--ita-visible-in-index`. Both options are experimental and could be removed in the future. For a more detailed explanation of these common options, see also [gitdiffcore[7]](gitdiffcore). -1 --base -2 --ours -3 --theirs -0 Diff against the "base" version, "our branch" or "their branch" respectively. With these options, diffs for merged entries are not shown. The default is to diff against our branch (-2) and the cleanly resolved paths. The option -0 can be given to omit diff output for unmerged entries and just show "Unmerged". -c --cc This compares stage 2 (our branch), stage 3 (their branch) and the working tree file and outputs a combined diff, similar to the way `diff-tree` shows a merge commit with these flags. -q Remain silent even on nonexistent files. Raw output format ----------------- The raw output format from "git-diff-index", "git-diff-tree", "git-diff-files" and "git diff --raw" is very similar. These commands all compare two sets of things; what is compared differs: git-diff-index <tree-ish> compares the <tree-ish> and the files on the filesystem. git-diff-index --cached <tree-ish> compares the <tree-ish> and the index. git-diff-tree [-r] <tree-ish-1> <tree-ish-2> [<pattern>…​] compares the trees named by the two arguments. git-diff-files [<pattern>…​] compares the index and the files on the filesystem. The "git-diff-tree" command begins its output by printing the hash of what is being compared. After that, all the commands print one output line per changed file.
An output line is formatted this way: ``` in-place edit :100644 100644 bcd1234 0123456 M file0 copy-edit :100644 100644 abcd123 1234567 C68 file1 file2 rename-edit :100644 100644 abcd123 1234567 R86 file1 file3 create :000000 100644 0000000 1234567 A file4 delete :100644 000000 1234567 0000000 D file5 unmerged :000000 000000 0000000 0000000 U file6 ``` That is, from the left to the right: 1. a colon. 2. mode for "src"; 000000 if creation or unmerged. 3. a space. 4. mode for "dst"; 000000 if deletion or unmerged. 5. a space. 6. sha1 for "src"; 0{40} if creation or unmerged. 7. a space. 8. sha1 for "dst"; 0{40} if deletion, unmerged or "work tree out of sync with the index". 9. a space. 10. status, followed by optional "score" number. 11. a tab or a NUL when `-z` option is used. 12. path for "src" 13. a tab or a NUL when `-z` option is used; only exists for C or R. 14. path for "dst"; only exists for C or R. 15. an LF or a NUL when `-z` option is used, to terminate the record. Possible status letters are: * A: addition of a file * C: copy of a file into a new one * D: deletion of a file * M: modification of the contents or mode of a file * R: renaming of a file * T: change in the type of the file (regular file, symbolic link or submodule) * U: file is unmerged (you must complete the merge before it can be committed) * X: "unknown" change type (most probably a bug, please report it) Status letters C and R are always followed by a score (denoting the percentage of similarity between the source and target of the move or copy). Status letter M may be followed by a score (denoting the percentage of dissimilarity) for file rewrites. The sha1 for "dst" is shown as all 0’s if a file on the filesystem is out of sync with the index. Example: ``` :100644 100644 5be4a4a 0000000 M file.c ``` Without the `-z` option, pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). 
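A hedged sketch of what `git diff-files` actually prints for a modified-but-unstaged file (throwaway repository; the "src" object name varies per run, and note the all-zero "dst" sha1, since the work tree is out of sync with the index):

```shell
# Sketch only: field values other than mode/status/path differ per repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
echo one > file0 && git add file0 && git commit -qm init

echo two >> file0
git diff-files
# e.g. :100644 100644 <src-sha1> 0000000000000000000000000000000000000000 M	file0
```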
Using `-z` the filename is output verbatim and the line is terminated by a NUL byte. Diff format for merges ---------------------- "git-diff-tree", "git-diff-files" and "git-diff --raw" can take `-c` or `--cc` option to generate diff output also for merge commits. The output differs from the format described above in the following way: 1. there is a colon for each parent 2. there are more "src" modes and "src" sha1 3. status is concatenated status characters for each parent 4. no optional "score" number 5. tab-separated pathname(s) of the file For `-c` and `--cc`, only the destination or final path is shown even if the file was renamed on any side of history. With `--combined-all-paths`, the name of the path in each parent is shown followed by the name of the path in the merge commit. Examples for `-c` and `--cc` without `--combined-all-paths`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR phooey.c ``` Examples when `--combined-all-paths` added to either `-c` or `--cc`: ``` ::100644 100644 100644 fabadb8 cc95eb0 4866510 MM desc.c desc.c desc.c ::100755 100755 100755 52b7a2d 6d1ac04 d2ac7d7 RM foo.sh bar.sh bar.sh ::100644 100644 100644 e07d6c5 9042e82 ee91881 RR fooey.c fuey.c phooey.c ``` Note that `combined diff` lists only files which were modified from all parents. Generating patch text with -p ----------------------------- Running [git-diff[1]](git-diff), [git-log[1]](git-log), [git-show[1]](git-show), [git-diff-index[1]](git-diff-index), [git-diff-tree[1]](git-diff-tree), or [git-diff-files[1]](git-diff-files) with the `-p` option produces patch text. You can customize the creation of patch text via the `GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables (see [git[1]](git)), and the `diff` attribute (see [gitattributes[5]](gitattributes)). What the -p option produces is slightly different from the traditional diff format: 1. 
It is preceded with a "git diff" header that looks like this: ``` diff --git a/file1 b/file2 ``` The `a/` and `b/` filenames are the same unless rename/copy is involved. In particular, even for a creation or a deletion, `/dev/null` is not used in place of the `a/` or `b/` filenames. When rename/copy is involved, `file1` and `file2` show the name of the source file of the rename/copy and the name of the file that rename/copy produces, respectively. 2. It is followed by one or more extended header lines: ``` old mode <mode> new mode <mode> deleted file mode <mode> new file mode <mode> copy from <path> copy to <path> rename from <path> rename to <path> similarity index <number> dissimilarity index <number> index <hash>..<hash> <mode> ``` File modes are printed as 6-digit octal numbers including the file type and file permission bits. Path names in extended headers do not include the `a/` and `b/` prefixes. The similarity index is the percentage of unchanged lines, and the dissimilarity index is the percentage of changed lines. Each is a rounded-down integer, followed by a percent sign. The similarity index value of 100% is thus reserved for two equal files, while 100% dissimilarity means that no line from the old file made it into the new one. The index line includes the blob object names before and after the change. The <mode> is included if the file mode does not change; otherwise, separate lines indicate the old and the new mode. 3. Pathnames with "unusual" characters are quoted as explained for the configuration variable `core.quotePath` (see [git-config[1]](git-config)). 4. All the `file1` files in the output refer to files before the commit, and all the `file2` files refer to files after the commit. It is incorrect to apply each change to each file sequentially. For example, this patch will swap a and b: ``` diff --git a/a b/b rename from a rename to b diff --git a/b b/a rename from b rename to a ``` 5.
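For instance (a sketch against a throwaway repository), a staged pure rename produces the `similarity index`/`rename from`/`rename to` extended headers:

```shell
# Sketch only: a pure rename with unchanged content gives similarity index 100%.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
seq 1 10 > old.txt && git add old.txt && git commit -qm init

git mv old.txt new.txt
git diff --cached -M
# diff --git a/old.txt b/new.txt
# similarity index 100%
# rename from old.txt
# rename to new.txt
```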
Hunk headers mention the name of the function to which the hunk applies. See "Defining a custom hunk-header" in [gitattributes[5]](gitattributes) for details of how to tailor this to specific languages. Combined diff format -------------------- Any diff-generating command can take the `-c` or `--cc` option to produce a `combined diff` when showing a merge. This is the default format when showing merges with [git-diff[1]](git-diff) or [git-show[1]](git-show). Note also that you can give a suitable `--diff-merges` option to any of these commands to force generation of diffs in a specific format. A "combined diff" format looks like this: ``` diff --combined describe.c index fabadb8,cc95eb0..4866510 --- a/describe.c +++ b/describe.c @@@ -98,20 -98,12 +98,20 @@@ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1; } - static void describe(char *arg) -static void describe(struct commit *cmit, int last_one) ++static void describe(char *arg, int last_one) { + unsigned char sha1[20]; + struct commit *cmit; struct commit_list *list; static int initialized = 0; struct commit_name *n; + if (get_sha1(arg, sha1) < 0) + usage(describe_usage); + cmit = lookup_commit_reference(sha1); + if (!cmit) + usage(describe_usage); + if (!initialized) { initialized = 1; for_each_ref(get_name); ``` 1. It is preceded with a "git diff" header that looks like this (when the `-c` option is used): ``` diff --combined file ``` or like this (when the `--cc` option is used): ``` diff --cc file ``` 2. It is followed by one or more extended header lines (this example shows a merge with two parents): ``` index <hash>,<hash>..<hash> mode <mode>,<mode>..<mode> new file mode <mode> deleted file mode <mode>,<mode> ``` The `mode <mode>,<mode>..<mode>` line appears only if at least one of the <mode> is different from the rest.
Extended headers with information about detected contents movement (renames and copying detection) are designed to work with a diff of two <tree-ish> and are not used by the combined diff format. 3. It is followed by a two-line from-file/to-file header ``` --- a/file +++ b/file ``` Similar to the two-line header of the traditional `unified` diff format, `/dev/null` is used to signal created or deleted files. However, if the `--combined-all-paths` option is provided, instead of a two-line from-file/to-file header you get an N+1 line from-file/to-file header, where N is the number of parents in the merge commit: ``` --- a/file --- a/file --- a/file +++ b/file ``` This extended format can be useful if rename or copy detection is active, to allow you to see the original name of the file in different parents. 4. The chunk header format is modified to prevent people from accidentally feeding it to `patch -p1`. Combined diff format was created for review of merge commit changes, and was not meant to be applied. The change is similar to the change in the extended `index` header: ``` @@@ <from-file-range> <from-file-range> <to-file-range> @@@ ``` There are (number of parents + 1) `@` characters in the chunk header for combined diff format. Unlike the traditional `unified` diff format, which shows two files A and B with a single column that has `-` (minus — appears in A but removed in B), `+` (plus — missing in A but added to B), or `" "` (space — unchanged) prefix, this format compares two or more files file1, file2,…​ with one file X, and shows how X differs from each of fileN. One column for each of fileN is prepended to the output line to note how X’s line is different from it. A `-` character in column N means that the line appears in fileN but it does not appear in the result. A `+` character in column N means that the line appears in the result, and fileN does not have that line (in other words, the line was added, from the point of view of that parent).
In the above example output, the function signature was changed from both files (hence two `-` removals from both file1 and file2, plus `++` to mean one line that was added does not appear in either file1 or file2). Also eight other lines are the same from file1 but do not appear in file2 (hence prefixed with `+`). When shown by `git diff-tree -c`, it compares the parents of a merge commit with the merge result (i.e. file1..fileN are the parents). When shown by `git diff-files -c`, it compares the two unresolved merge parents with the working tree file (i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka "their version"). Other diff formats ------------------ The `--summary` option describes newly added, deleted, renamed and copied files. The `--stat` option adds diffstat(1) graph to the output. These options can be combined with other options, such as `-p`, and are meant for human consumption. When showing a change that involves a rename or a copy, `--stat` output formats the pathnames compactly by combining common prefix and suffix of the pathnames. For example, a change that moves `arch/i386/Makefile` to `arch/x86/Makefile` while modifying 4 lines will be shown like this: ``` arch/{i386 => x86}/Makefile | 4 +-- ``` The `--numstat` option gives the diffstat(1) information but is designed for easier machine consumption. An entry in `--numstat` output looks like this: ``` 1 2 README 3 1 arch/{i386 => x86}/Makefile ``` That is, from left to right: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. pathname (possibly with rename/copy information); 6. a newline. When `-z` output option is in effect, the output is formatted this way: ``` 1 2 README NUL 3 1 NUL arch/i386/Makefile NUL arch/x86/Makefile NUL ``` That is: 1. the number of added lines; 2. a tab; 3. the number of deleted lines; 4. a tab; 5. a NUL (only exists if renamed/copied); 6. pathname in preimage; 7. a NUL (only exists if renamed/copied); 8. 
pathname in postimage (only exists if renamed/copied); 9. a NUL. The extra `NUL` before the preimage path in renamed case is to allow scripts that read the output to tell if the current record being read is a single-path record or a rename/copy record without reading ahead. After reading added and deleted lines, reading up to `NUL` would yield the pathname, but if that is `NUL`, the record will show two paths.
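A small sketch of `--numstat` in practice (toy repository; the counts shown are simply what this particular edit produces):

```shell
# Sketch only: one line deleted, two lines added in README.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
printf 'a\nb\n' > README && git add README && git commit -qm init

printf 'a\nc\nd\n' > README   # drop "b", add "c" and "d"
git diff --numstat            # added, deleted, pathname: "2  1  README"
```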
git-cherry-pick =============== Name ---- git-cherry-pick - Apply the changes introduced by some existing commits Synopsis -------- ``` git cherry-pick [--edit] [-n] [-m <parent-number>] [-s] [-x] [--ff] [-S[<keyid>]] <commit>…​ git cherry-pick (--continue | --skip | --abort | --quit) ``` Description ----------- Given one or more existing commits, apply the change each one introduces, recording a new commit for each. This requires your working tree to be clean (no modifications from the HEAD commit). When it is not obvious how to apply a change, the following happens: 1. The current branch and `HEAD` pointer stay at the last commit successfully made. 2. The `CHERRY_PICK_HEAD` ref is set to point at the commit that introduced the change that is difficult to apply. 3. Paths in which the change applied cleanly are updated both in the index file and in your working tree. 4. For conflicting paths, the index file records up to three versions, as described in the "TRUE MERGE" section of [git-merge[1]](git-merge). The working tree files will include a description of the conflict bracketed by the usual conflict markers `<<<<<<<` and `>>>>>>>`. 5. No other modifications are made. See [git-merge[1]](git-merge) for some hints on resolving such conflicts. Options ------- <commit>…​ Commits to cherry-pick. For a more complete list of ways to spell commits, see [gitrevisions[7]](gitrevisions). Sets of commits can be passed but no traversal is done by default, as if the `--no-walk` option was specified, see [git-rev-list[1]](git-rev-list). Note that specifying a range will feed all <commit>…​ arguments to a single revision walk (see a later example that uses `maint master..next`). -e --edit With this option, `git cherry-pick` will let you edit the commit message prior to committing. --cleanup=<mode> This option determines how the commit message will be cleaned up before being passed on to the commit machinery. See [git-commit[1]](git-commit) for more details.
In particular, if the `<mode>` is given a value of `scissors`, scissors will be appended to `MERGE_MSG` before being passed on in the case of a conflict. -x When recording the commit, append a line that says "(cherry picked from commit …​)" to the original commit message in order to indicate which commit this change was cherry-picked from. This is done only for cherry picks without conflicts. Do not use this option if you are cherry-picking from your private branch because the information is useless to the recipient. If on the other hand you are cherry-picking between two publicly visible branches (e.g. backporting a fix to a maintenance branch for an older release from a development branch), adding this information can be useful. -r It used to be that the command defaulted to do `-x` described above, and `-r` was to disable it. Now the default is not to do `-x` so this option is a no-op. -m <parent-number> --mainline <parent-number> Usually you cannot cherry-pick a merge because you do not know which side of the merge should be considered the mainline. This option specifies the parent number (starting from 1) of the mainline and allows cherry-pick to replay the change relative to the specified parent. -n --no-commit Usually the command automatically creates a sequence of commits. This flag applies the changes necessary to cherry-pick each named commit to your working tree and the index, without making any commit. In addition, when this option is used, your index does not have to match the HEAD commit. The cherry-pick is done against the beginning state of your index. This is useful when cherry-picking the effect of more than one commit to your index in a row. -s --signoff Add a `Signed-off-by` trailer at the end of the commit message. See the signoff option in [git-commit[1]](git-commit) for more information. -S[<keyid>] --gpg-sign[=<keyid>] --no-gpg-sign GPG-sign commits.
The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. `--no-gpg-sign` is useful to countermand both `commit.gpgSign` configuration variable, and earlier `--gpg-sign`. --ff If the current HEAD is the same as the parent of the cherry-pick’ed commit, then a fast forward to this commit will be performed. --allow-empty By default, cherry-picking an empty commit will fail, indicating that an explicit invocation of `git commit --allow-empty` is required. This option overrides that behavior, allowing empty commits to be preserved automatically in a cherry-pick. Note that when "--ff" is in effect, empty commits that meet the "fast-forward" requirement will be kept even without this option. Note also, that use of this option only keeps commits that were initially empty (i.e. the commit recorded the same tree as its parent). Commits which are made empty due to a previous commit are dropped. To force the inclusion of those commits use `--keep-redundant-commits`. --allow-empty-message By default, cherry-picking a commit with an empty message will fail. This option overrides that behavior, allowing commits with empty messages to be cherry picked. --keep-redundant-commits If a commit being cherry picked duplicates a commit already in the current history, it will become empty. By default these redundant commits cause `cherry-pick` to stop so the user can examine the commit. This option overrides that behavior and creates an empty commit object. Implies `--allow-empty`. --strategy=<strategy> Use the given merge strategy. Should only be used once. See the MERGE STRATEGIES section in [git-merge[1]](git-merge) for details. -X<option> --strategy-option=<option> Pass the merge strategy-specific option through to the merge strategy. See [git-merge[1]](git-merge) for details. 
--rerere-autoupdate --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. `--no-rerere-autoupdate` is a good way to double-check what `rerere` did and catch potential mismerges, before committing the result to the index with a separate `git add`. Sequencer subcommands --------------------- --continue Continue the operation in progress using the information in `.git/sequencer`. Can be used to continue after resolving conflicts in a failed cherry-pick or revert. --skip Skip the current commit and continue with the rest of the sequence. --quit Forget about the current operation in progress. Can be used to clear the sequencer state after a failed cherry-pick or revert. --abort Cancel the operation and return to the pre-sequence state. Examples -------- `git cherry-pick master` Apply the change introduced by the commit at the tip of the master branch and create a new commit with this change. `git cherry-pick ..master` `git cherry-pick ^HEAD master` Apply the changes introduced by all commits that are ancestors of master but not of HEAD to produce new commits. `git cherry-pick maint next ^master` `git cherry-pick maint master..next` Apply the changes introduced by all commits that are ancestors of maint or next, but not master or any of its ancestors. Note that the latter does not mean `maint` and everything between `master` and `next`; specifically, `maint` will not be used if it is included in `master`. `git cherry-pick master~4 master~2` Apply the changes introduced by the fifth and third last commits pointed to by master and create 2 new commits with these changes. `git cherry-pick -n master~1 next` Apply to the working tree and the index the changes introduced by the second last commit pointed to by master and by the last commit pointed to by next, but do not create any commit with these changes. 
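The basic flow can be sketched end to end in a throwaway repository (branch and file names invented for this sketch); note the trailer that `-x` appends to the replayed commit's message:

```shell
# Sketch only: cherry-pick a topic commit back onto the starting branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
echo base > f && git add f && git commit -qm base

git checkout -qb topic
echo fix > g && git add g && git commit -qm 'fix: add g'

git checkout -q -          # back to the original branch
git cherry-pick -x topic   # clean apply: g did not exist here
git log -1 --pretty=%B     # message ends with "(cherry picked from commit ...)"
```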
`git cherry-pick --ff ..next` If history is linear and HEAD is an ancestor of next, update the working tree and advance the HEAD pointer to match next. Otherwise, apply the changes introduced by those commits that are in next but not HEAD to the current branch, creating a new commit for each new change. `git rev-list --reverse master -- README | git cherry-pick -n --stdin` Apply the changes introduced by all commits on the master branch that touched README to the working tree and index, so the result can be inspected and made into a single new commit if suitable. The following sequence attempts to backport a patch, bails out because the code the patch applies to has changed too much, and then tries again, this time exercising more care about matching up context lines. ``` $ git cherry-pick topic^ (1) $ git diff (2) $ git reset --merge ORIG_HEAD (3) $ git cherry-pick -Xpatience topic^ (4) ``` 1. apply the change that would be shown by `git show topic^`. In this example, the patch does not apply cleanly, so information about the conflict is written to the index and working tree and no new commit results. 2. summarize changes to be reconciled 3. cancel the cherry-pick. In other words, return to the pre-cherry-pick state, preserving any local modifications you had in the working tree. 4. try to apply the change introduced by `topic^` again, spending extra time to avoid mistakes based on incorrectly matching context lines. See also -------- [git-revert[1]](git-revert) rails module ActionMailer module ActionMailer ==================== gem\_version() Show source ``` # File actionmailer/lib/action_mailer/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Action Mailer as a `Gem::Version`. version() Show source ``` # File actionmailer/lib/action_mailer/version.rb, line 8 def self.version gem_version end ``` Returns the version of the currently loaded Action Mailer as a `Gem::Version`. 
rails module Rails module Rails ============= app\_class[RW] application[W] cache[RW] logger[RW] application() Show source ``` # File railties/lib/rails.rb, line 39 def application @application ||= (app_class.instance if app_class) end ``` autoloaders() Show source ``` # File railties/lib/rails.rb, line 116 def autoloaders application.autoloaders end ``` backtrace\_cleaner() Show source ``` # File railties/lib/rails.rb, line 50 def backtrace_cleaner @backtrace_cleaner ||= begin # Relies on Active Support, so we have to lazy load to postpone definition until Active Support has been loaded require "rails/backtrace_cleaner" Rails::BacktraceCleaner.new end end ``` configuration() Show source ``` # File railties/lib/rails.rb, line 46 def configuration application.config end ``` The `Configuration` instance used to configure the `Rails` environment env() Show source ``` # File railties/lib/rails.rb, line 72 def env @_env ||= ActiveSupport::EnvironmentInquirer.new(ENV["RAILS_ENV"].presence || ENV["RACK_ENV"].presence || "development") end ``` Returns the current `Rails` environment. ``` Rails.env # => "development" Rails.env.development? # => true Rails.env.production? # => false ``` env=(environment) Show source ``` # File railties/lib/rails.rb, line 79 def env=(environment) @_env = ActiveSupport::EnvironmentInquirer.new(environment) end ``` Sets the `Rails` environment. ``` Rails.env = "staging" # => "staging" ``` error() Show source ``` # File railties/lib/rails.rb, line 83 def error application && application.executor.error_reporter end ``` gem\_version() Show source ``` # File railties/lib/rails/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded `Rails` as a `Gem::Version` groups(\*groups) Show source ``` # File railties/lib/rails.rb, line 96 def groups(*groups) hash = groups.extract_options! 
env = Rails.env groups.unshift(:default, env) groups.concat ENV["RAILS_GROUPS"].to_s.split(",") groups.concat hash.map { |k, v| k if v.map(&:to_s).include?(env) } groups.compact! groups.uniq! groups end ``` Returns all `Rails` groups for loading based on: * The `Rails` environment; * The environment variable RAILS\_GROUPS; * The optional envs given as argument and the hash with group dependencies; ``` Rails.groups assets: [:development, :test] # => [:default, "development", :assets] for Rails.env == "development" # => [:default, "production"] for Rails.env == "production" ``` public\_path() Show source ``` # File railties/lib/rails.rb, line 112 def public_path application && Pathname.new(application.paths["public"].first) end ``` Returns a [`Pathname`](pathname) object of the public folder of the current `Rails` project, otherwise it returns `nil` if there is no project: ``` Rails.public_path # => #<Pathname:/Users/someuser/some/path/project/public> ``` root() Show source ``` # File railties/lib/rails.rb, line 63 def root application && application.config.root end ``` Returns a [`Pathname`](pathname) object of the current `Rails` project, otherwise it returns `nil` if there is no project: ``` Rails.root # => #<Pathname:/Users/someuser/some/path/project> ``` version() Show source ``` # File railties/lib/rails/version.rb, line 7 def self.version VERSION::STRING end ``` Returns the version of the currently loaded `Rails` as a string. 
rails class Numeric class Numeric ============== Parent: [Object](object) EXABYTE GIGABYTE KILOBYTE MEGABYTE PETABYTE TERABYTE byte() Alias for: [bytes](numeric#method-i-bytes) bytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 14 def bytes self end ``` Enables the use of byte calculations and declarations, like 45.bytes + 2.6.megabytes ``` 2.bytes # => 2 ``` Also aliased as: [byte](numeric#method-i-byte) day() Alias for: [days](numeric#method-i-days) days() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 37 def days ActiveSupport::Duration.days(self) end ``` Returns a Duration instance matching the number of days provided. ``` 2.days # => 2 days ``` Also aliased as: [day](numeric#method-i-day) exabyte() Alias for: [exabytes](numeric#method-i-exabytes) exabytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 62 def exabytes self * EXABYTE end ``` Returns the number of bytes equivalent to the exabytes provided. ``` 2.exabytes # => 2_305_843_009_213_693_952 ``` Also aliased as: [exabyte](numeric#method-i-exabyte) fortnight() Alias for: [fortnights](numeric#method-i-fortnights) fortnights() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 53 def fortnights ActiveSupport::Duration.weeks(self * 2) end ``` Returns a Duration instance matching the number of fortnights provided. ``` 2.fortnights # => 4 weeks ``` Also aliased as: [fortnight](numeric#method-i-fortnight) gigabyte() Alias for: [gigabytes](numeric#method-i-gigabytes) gigabytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 38 def gigabytes self * GIGABYTE end ``` Returns the number of bytes equivalent to the gigabytes provided. 
``` 2.gigabytes # => 2_147_483_648 ``` Also aliased as: [gigabyte](numeric#method-i-gigabyte) hour() Alias for: [hours](numeric#method-i-hours) hours() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 29 def hours ActiveSupport::Duration.hours(self) end ``` Returns a Duration instance matching the number of hours provided. ``` 2.hours # => 2 hours ``` Also aliased as: [hour](numeric#method-i-hour) html\_safe?() Show source ``` # File activesupport/lib/active_support/core_ext/string/output_safety.rb, line 128 def html_safe? true end ``` in\_milliseconds() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 63 def in_milliseconds self * 1000 end ``` Returns the number of milliseconds equivalent to the seconds provided. Used with the standard time durations. ``` 2.in_milliseconds # => 2000 1.hour.in_milliseconds # => 3600000 ``` kilobyte() Alias for: [kilobytes](numeric#method-i-kilobytes) kilobytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 22 def kilobytes self * KILOBYTE end ``` Returns the number of bytes equivalent to the kilobytes provided. ``` 2.kilobytes # => 2048 ``` Also aliased as: [kilobyte](numeric#method-i-kilobyte) megabyte() Alias for: [megabytes](numeric#method-i-megabytes) megabytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 30 def megabytes self * MEGABYTE end ``` Returns the number of bytes equivalent to the megabytes provided. ``` 2.megabytes # => 2_097_152 ``` Also aliased as: [megabyte](numeric#method-i-megabyte) minute() Alias for: [minutes](numeric#method-i-minutes) minutes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 21 def minutes ActiveSupport::Duration.minutes(self) end ``` Returns a Duration instance matching the number of minutes provided. 
``` 2.minutes # => 2 minutes ``` Also aliased as: [minute](numeric#method-i-minute) petabyte() Alias for: [petabytes](numeric#method-i-petabytes) petabytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 54 def petabytes self * PETABYTE end ``` Returns the number of bytes equivalent to the petabytes provided. ``` 2.petabytes # => 2_251_799_813_685_248 ``` Also aliased as: [petabyte](numeric#method-i-petabyte) second() Alias for: [seconds](numeric#method-i-seconds) seconds() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 13 def seconds ActiveSupport::Duration.seconds(self) end ``` Returns a Duration instance matching the number of seconds provided. ``` 2.seconds # => 2 seconds ``` Also aliased as: [second](numeric#method-i-second) terabyte() Alias for: [terabytes](numeric#method-i-terabytes) terabytes() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/bytes.rb, line 46 def terabytes self * TERABYTE end ``` Returns the number of bytes equivalent to the terabytes provided. ``` 2.terabytes # => 2_199_023_255_552 ``` Also aliased as: [terabyte](numeric#method-i-terabyte) week() Alias for: [weeks](numeric#method-i-weeks) weeks() Show source ``` # File activesupport/lib/active_support/core_ext/numeric/time.rb, line 45 def weeks ActiveSupport::Duration.weeks(self) end ``` Returns a Duration instance matching the number of weeks provided. 
``` 2.weeks # => 2 weeks ``` Also aliased as: [week](numeric#method-i-week) rails module SecureRandom module SecureRandom ==================== BASE36\_ALPHABET BASE58\_ALPHABET base36(n = 16) Show source ``` # File activesupport/lib/active_support/core_ext/securerandom.rb, line 38 def self.base36(n = 16) SecureRandom.random_bytes(n).unpack("C*").map do |byte| idx = byte % 64 idx = SecureRandom.random_number(36) if idx >= 36 BASE36_ALPHABET[idx] end.join end ``` [`SecureRandom.base36`](securerandom#method-c-base36) generates a random base36 string in lowercase. The argument *n* specifies the length of the random string to be generated. If *n* is not specified or is `nil`, 16 is assumed. It may be larger in the future. This method can be used over `base58` if a deterministic case key is necessary. The result will contain alphanumeric characters in lowercase. ``` p SecureRandom.base36 # => "4kugl2pdqmscqtje" p SecureRandom.base36(24) # => "77tmhrhjfvfdwodq8w7ev2m7" ``` base58(n = 16) Show source ``` # File activesupport/lib/active_support/core_ext/securerandom.rb, line 19 def self.base58(n = 16) SecureRandom.random_bytes(n).unpack("C*").map do |byte| idx = byte % 64 idx = SecureRandom.random_number(58) if idx >= 58 BASE58_ALPHABET[idx] end.join end ``` [`SecureRandom.base58`](securerandom#method-c-base58) generates a random base58 string. The argument *n* specifies the length of the random string to be generated. If *n* is not specified or is `nil`, 16 is assumed. It may be larger in the future. The result may contain alphanumeric characters except 0, O, I and l. ``` p SecureRandom.base58 # => "4kUgL2pdQMSCQtjE" p SecureRandom.base58(24) # => "77TMHrHJFvFDwodq8w7Ev2m7" ```
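The resampling step inside `base36` and `base58` above is there to avoid modulo bias: a raw byte reduced mod 64 is uniform over 0..63 (because 256 is an exact multiple of 64), but reducing mod 36 or mod 58 directly would favor the low indices. Any index that falls outside the alphabet is therefore replaced with a fresh uniform draw. A minimal standalone sketch of the same trick, assuming only the Ruby standard library (`random_base36` and `ALPHABET` are illustrative names, not part of Active Support):

```ruby
require "securerandom"

# 36 lowercase alphanumeric symbols, like Active Support's BASE36_ALPHABET.
ALPHABET = ("0".."9").to_a + ("a".."z").to_a

def random_base36(n = 16)
  SecureRandom.random_bytes(n).unpack("C*").map do |byte|
    # byte % 64 is uniform over 0..63; indices >= 36 get a fresh
    # uniform draw from 0..35, so every symbol stays equally likely.
    idx = byte % 64
    idx = SecureRandom.random_number(36) if idx >= 36
    ALPHABET[idx]
  end.join
end
```

Each output character is independent and uniform over the 36-symbol alphabet, which is exactly what a naive `byte % 36` would not give you.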
rails class Integer class Integer ============== Parent: [Object](object) month() Alias for: [months](integer#method-i-months) months() Show source ``` # File activesupport/lib/active_support/core_ext/integer/time.rb, line 10 def months ActiveSupport::Duration.months(self) end ``` Returns a Duration instance matching the number of months provided. ``` 2.months # => 2 months ``` Also aliased as: [month](integer#method-i-month) multiple\_of?(number) Show source ``` # File activesupport/lib/active_support/core_ext/integer/multiple.rb, line 9 def multiple_of?(number) number == 0 ? self == 0 : self % number == 0 end ``` Check whether the integer is evenly divisible by the argument. ``` 0.multiple_of?(0) # => true 6.multiple_of?(5) # => false 10.multiple_of?(2) # => true ``` ordinal() Show source ``` # File activesupport/lib/active_support/core_ext/integer/inflections.rb, line 28 def ordinal ActiveSupport::Inflector.ordinal(self) end ``` Ordinal returns the suffix used to denote the position in an ordered sequence such as 1st, 2nd, 3rd, 4th. ``` 1.ordinal # => "st" 2.ordinal # => "nd" 1002.ordinal # => "nd" 1003.ordinal # => "rd" -11.ordinal # => "th" -1001.ordinal # => "st" ``` ordinalize() Show source ``` # File activesupport/lib/active_support/core_ext/integer/inflections.rb, line 15 def ordinalize ActiveSupport::Inflector.ordinalize(self) end ``` Ordinalize turns a number into an ordinal string used to denote the position in an ordered sequence such as 1st, 2nd, 3rd, 4th. ``` 1.ordinalize # => "1st" 2.ordinalize # => "2nd" 1002.ordinalize # => "1002nd" 1003.ordinalize # => "1003rd" -11.ordinalize # => "-11th" -1001.ordinalize # => "-1001st" ``` year() Alias for: [years](integer#method-i-years) years() Show source ``` # File activesupport/lib/active_support/core_ext/integer/time.rb, line 18 def years ActiveSupport::Duration.years(self) end ``` Returns a Duration instance matching the number of years provided. 
``` 2.years # => 2 years ``` Also aliased as: [year](integer#method-i-year) rails class Method class Method ============= Parent: [Object](object) duplicable?() Show source ``` # File activesupport/lib/active_support/core_ext/object/duplicable.rb, line 36 def duplicable? false end ``` Methods are not duplicable: ``` method(:puts).duplicable? # => false method(:puts).dup # => TypeError: allocator undefined for Method ``` rails Ruby on Rails Ruby on Rails ============= * [AbstractController::ActionNotFound](abstractcontroller/actionnotfound) * [AbstractController::Base](abstractcontroller/base) * [AbstractController::Caching](abstractcontroller/caching) * [AbstractController::Caching::Fragments](abstractcontroller/caching/fragments) * [AbstractController::Caching::Fragments::ClassMethods](abstractcontroller/caching/fragments/classmethods) * [AbstractController::Callbacks](abstractcontroller/callbacks) * [AbstractController::Callbacks::ClassMethods](abstractcontroller/callbacks/classmethods) * [AbstractController::Helpers::ClassMethods](abstractcontroller/helpers/classmethods) * [AbstractController::Rendering](abstractcontroller/rendering) * [AbstractController::Translation](abstractcontroller/translation) * [AbstractController::UrlFor](abstractcontroller/urlfor) * [ActionCable](actioncable) * [ActionCable::Channel::Base](actioncable/channel/base) * [ActionCable::Channel::Broadcasting::ClassMethods](actioncable/channel/broadcasting/classmethods) * [ActionCable::Channel::ChannelStub](actioncable/channel/channelstub) * [ActionCable::Channel::Naming::ClassMethods](actioncable/channel/naming/classmethods) * [ActionCable::Channel::PeriodicTimers::ClassMethods](actioncable/channel/periodictimers/classmethods) * [ActionCable::Channel::Streams](actioncable/channel/streams) * [ActionCable::Channel::TestCase](actioncable/channel/testcase) * [ActionCable::Channel::TestCase::Behavior](actioncable/channel/testcase/behavior) * 
[ActionCable::Connection::Assertions](actioncable/connection/assertions) * [ActionCable::Connection::Authorization](actioncable/connection/authorization) * [ActionCable::Connection::Base](actioncable/connection/base) * [ActionCable::Connection::Identification](actioncable/connection/identification) * [ActionCable::Connection::Identification::ClassMethods](actioncable/connection/identification/classmethods) * [ActionCable::Connection::InternalChannel](actioncable/connection/internalchannel) * [ActionCable::Connection::TaggedLoggerProxy](actioncable/connection/taggedloggerproxy) * [ActionCable::Connection::TestCase](actioncable/connection/testcase) * [ActionCable::Connection::TestCase::Behavior](actioncable/connection/testcase/behavior) * [ActionCable::Connection::TestCookieJar](actioncable/connection/testcookiejar) * [ActionCable::Helpers::ActionCableHelper](actioncable/helpers/actioncablehelper) * [ActionCable::RemoteConnections](actioncable/remoteconnections) * [ActionCable::RemoteConnections::RemoteConnection](actioncable/remoteconnections/remoteconnection) * [ActionCable::Server::Base](actioncable/server/base) * [ActionCable::Server::Broadcasting](actioncable/server/broadcasting) * [ActionCable::Server::Configuration](actioncable/server/configuration) * [ActionCable::SubscriptionAdapter::Test](actioncable/subscriptionadapter/test) * [ActionCable::TestHelper](actioncable/testhelper) * [ActionController](actioncontroller) * [ActionController::API](actioncontroller/api) * [ActionController::Base](actioncontroller/base) * [ActionController::Caching](actioncontroller/caching) * [ActionController::ConditionalGet](actioncontroller/conditionalget) * [ActionController::ConditionalGet::ClassMethods](actioncontroller/conditionalget/classmethods) * [ActionController::Cookies](actioncontroller/cookies) * [ActionController::DataStreaming](actioncontroller/datastreaming) * [ActionController::DefaultHeaders](actioncontroller/defaultheaders) * 
[ActionController::EtagWithFlash](actioncontroller/etagwithflash) * [ActionController::EtagWithTemplateDigest](actioncontroller/etagwithtemplatedigest) * [ActionController::Flash::ClassMethods](actioncontroller/flash/classmethods) * [ActionController::FormBuilder](actioncontroller/formbuilder) * [ActionController::FormBuilder::ClassMethods](actioncontroller/formbuilder/classmethods) * [ActionController::Head](actioncontroller/head) * [ActionController::Helpers](actioncontroller/helpers) * [ActionController::Helpers::ClassMethods](actioncontroller/helpers/classmethods) * [ActionController::HttpAuthentication](actioncontroller/httpauthentication) * [ActionController::HttpAuthentication::Basic](actioncontroller/httpauthentication/basic) * [ActionController::HttpAuthentication::Digest](actioncontroller/httpauthentication/digest) * [ActionController::HttpAuthentication::Digest::ControllerMethods](actioncontroller/httpauthentication/digest/controllermethods) * [ActionController::HttpAuthentication::Token](actioncontroller/httpauthentication/token) * [ActionController::ImplicitRender](actioncontroller/implicitrender) * [ActionController::Live](actioncontroller/live) * [ActionController::Live::SSE](actioncontroller/live/sse) * [ActionController::Logging::ClassMethods](actioncontroller/logging/classmethods) * [ActionController::Metal](actioncontroller/metal) * [ActionController::MimeResponds](actioncontroller/mimeresponds) * [ActionController::MimeResponds::Collector](actioncontroller/mimeresponds/collector) * [ActionController::MissingRenderer](actioncontroller/missingrenderer) * [ActionController::ParameterEncoding](actioncontroller/parameterencoding) * [ActionController::ParameterEncoding::ClassMethods](actioncontroller/parameterencoding/classmethods) * [ActionController::ParameterMissing](actioncontroller/parametermissing) * [ActionController::Parameters](actioncontroller/parameters) * [ActionController::ParamsWrapper](actioncontroller/paramswrapper) * 
[ActionController::ParamsWrapper::Options::ClassMethods](actioncontroller/paramswrapper/options/classmethods) * [ActionController::PermissionsPolicy](actioncontroller/permissionspolicy) * [ActionController::Redirecting](actioncontroller/redirecting) * [ActionController::Renderer](actioncontroller/renderer) * [ActionController::Renderers](actioncontroller/renderers) * [ActionController::Renderers::All](actioncontroller/renderers/all) * [ActionController::Renderers::ClassMethods](actioncontroller/renderers/classmethods) * [ActionController::Rendering::ClassMethods](actioncontroller/rendering/classmethods) * [ActionController::RequestForgeryProtection](actioncontroller/requestforgeryprotection) * [ActionController::RequestForgeryProtection::ClassMethods](actioncontroller/requestforgeryprotection/classmethods) * [ActionController::RequestForgeryProtection::ProtectionMethods::NullSession](actioncontroller/requestforgeryprotection/protectionmethods/nullsession) * [ActionController::Rescue](actioncontroller/rescue) * [ActionController::RespondToMismatchError](actioncontroller/respondtomismatcherror) * [ActionController::Streaming](actioncontroller/streaming) * [ActionController::StrongParameters](actioncontroller/strongparameters) * [ActionController::TestCase](actioncontroller/testcase) * [ActionController::TestCase::Behavior](actioncontroller/testcase/behavior) * [ActionController::TestCase::Behavior::ClassMethods](actioncontroller/testcase/behavior/classmethods) * [ActionController::UnfilteredParameters](actioncontroller/unfilteredparameters) * [ActionController::UnpermittedParameters](actioncontroller/unpermittedparameters) * [ActionController::UrlFor](actioncontroller/urlfor) * [ActionDispatch::AssertionResponse](actiondispatch/assertionresponse) * [ActionDispatch::Assertions::ResponseAssertions](actiondispatch/assertions/responseassertions) * [ActionDispatch::Assertions::RoutingAssertions](actiondispatch/assertions/routingassertions) * 
[ActionDispatch::Callbacks](actiondispatch/callbacks) * [ActionDispatch::Cookies](actiondispatch/cookies) * [ActionDispatch::Cookies::ChainedCookieJars](actiondispatch/cookies/chainedcookiejars) * [ActionDispatch::DebugLocks](actiondispatch/debuglocks) * [ActionDispatch::FileHandler](actiondispatch/filehandler) * [ActionDispatch::Flash](actiondispatch/flash) * [ActionDispatch::Flash::FlashHash](actiondispatch/flash/flashhash) * [ActionDispatch::Flash::RequestMethods](actiondispatch/flash/requestmethods) * [ActionDispatch::HostAuthorization](actiondispatch/hostauthorization) * [ActionDispatch::Http::Cache::Request](actiondispatch/http/cache/request) * [ActionDispatch::Http::Cache::Response](actiondispatch/http/cache/response) * [ActionDispatch::Http::FilterParameters](actiondispatch/http/filterparameters) * [ActionDispatch::Http::Headers](actiondispatch/http/headers) * [ActionDispatch::Http::MimeNegotiation](actiondispatch/http/mimenegotiation) * [ActionDispatch::Http::Parameters](actiondispatch/http/parameters) * [ActionDispatch::Http::Parameters::ClassMethods](actiondispatch/http/parameters/classmethods) * [ActionDispatch::Http::Parameters::ParseError](actiondispatch/http/parameters/parseerror) * [ActionDispatch::Http::URL](actiondispatch/http/url) * [ActionDispatch::Http::UploadedFile](actiondispatch/http/uploadedfile) * [ActionDispatch::Integration::RequestHelpers](actiondispatch/integration/requesthelpers) * [ActionDispatch::Integration::Runner](actiondispatch/integration/runner) * [ActionDispatch::Integration::Session](actiondispatch/integration/session) * [ActionDispatch::IntegrationTest](actiondispatch/integrationtest) * [ActionDispatch::MiddlewareStack](actiondispatch/middlewarestack) * [ActionDispatch::MiddlewareStack::InstrumentationProxy](actiondispatch/middlewarestack/instrumentationproxy) * [ActionDispatch::PermissionsPolicy::Middleware](actiondispatch/permissionspolicy/middleware) * [ActionDispatch::PublicExceptions](actiondispatch/publicexceptions) * 
[ActionDispatch::RemoteIp](actiondispatch/remoteip) * [ActionDispatch::RemoteIp::GetIp](actiondispatch/remoteip/getip) * [ActionDispatch::Request](actiondispatch/request) * [ActionDispatch::RequestId](actiondispatch/requestid) * [ActionDispatch::Response](actiondispatch/response) * [ActionDispatch::Routing](actiondispatch/routing) * [ActionDispatch::Routing::Mapper](actiondispatch/routing/mapper) * [ActionDispatch::Routing::Mapper::Base](actiondispatch/routing/mapper/base) * [ActionDispatch::Routing::Mapper::Concerns](actiondispatch/routing/mapper/concerns) * [ActionDispatch::Routing::Mapper::CustomUrls](actiondispatch/routing/mapper/customurls) * [ActionDispatch::Routing::Mapper::HttpHelpers](actiondispatch/routing/mapper/httphelpers) * [ActionDispatch::Routing::Mapper::Resources](actiondispatch/routing/mapper/resources) * [ActionDispatch::Routing::Mapper::Scoping](actiondispatch/routing/mapper/scoping) * [ActionDispatch::Routing::PolymorphicRoutes](actiondispatch/routing/polymorphicroutes) * [ActionDispatch::Routing::Redirection](actiondispatch/routing/redirection) * [ActionDispatch::Routing::UrlFor](actiondispatch/routing/urlfor) * [ActionDispatch::SSL](actiondispatch/ssl) * [ActionDispatch::Session::CacheStore](actiondispatch/session/cachestore) * [ActionDispatch::Session::CookieStore](actiondispatch/session/cookiestore) * [ActionDispatch::Session::MemCacheStore](actiondispatch/session/memcachestore) * [ActionDispatch::Static](actiondispatch/static) * [ActionDispatch::SystemTestCase](actiondispatch/systemtestcase) * [ActionDispatch::SystemTesting::TestHelpers::ScreenshotHelper](actiondispatch/systemtesting/testhelpers/screenshothelper) * [ActionDispatch::TestProcess::FixtureFile](actiondispatch/testprocess/fixturefile) * [ActionDispatch::TestRequest](actiondispatch/testrequest) * [ActionDispatch::TestResponse](actiondispatch/testresponse) * [ActionMailbox](actionmailbox) * [ActionMailbox::Base](actionmailbox/base) * 
[ActionMailbox::BaseController](actionmailbox/basecontroller) * [ActionMailbox::Callbacks](actionmailbox/callbacks) * [ActionMailbox::InboundEmail](actionmailbox/inboundemail) * [ActionMailbox::InboundEmail::Incineratable](actionmailbox/inboundemail/incineratable) * [ActionMailbox::InboundEmail::Incineratable::Incineration](actionmailbox/inboundemail/incineratable/incineration) * [ActionMailbox::InboundEmail::MessageId](actionmailbox/inboundemail/messageid) * [ActionMailbox::InboundEmail::Routable](actionmailbox/inboundemail/routable) * [ActionMailbox::IncinerationJob](actionmailbox/incinerationjob) * [ActionMailbox::Ingresses::Mailgun::InboundEmailsController](actionmailbox/ingresses/mailgun/inboundemailscontroller) * [ActionMailbox::Ingresses::Mandrill::InboundEmailsController](actionmailbox/ingresses/mandrill/inboundemailscontroller) * [ActionMailbox::Ingresses::Postmark::InboundEmailsController](actionmailbox/ingresses/postmark/inboundemailscontroller) * [ActionMailbox::Ingresses::Relay::InboundEmailsController](actionmailbox/ingresses/relay/inboundemailscontroller) * [ActionMailbox::Ingresses::Sendgrid::InboundEmailsController](actionmailbox/ingresses/sendgrid/inboundemailscontroller) * [ActionMailbox::Router](actionmailbox/router) * [ActionMailbox::Router::Route](actionmailbox/router/route) * [ActionMailbox::Routing](actionmailbox/routing) * [ActionMailbox::RoutingJob](actionmailbox/routingjob) * [ActionMailbox::TestHelper](actionmailbox/testhelper) * [ActionMailer](actionmailer) * [ActionMailer::Base](actionmailer/base) * [ActionMailer::DeliveryMethods](actionmailer/deliverymethods) * [ActionMailer::DeliveryMethods::ClassMethods](actionmailer/deliverymethods/classmethods) * [ActionMailer::InlinePreviewInterceptor](actionmailer/inlinepreviewinterceptor) * [ActionMailer::LogSubscriber](actionmailer/logsubscriber) * [ActionMailer::MailHelper](actionmailer/mailhelper) * [ActionMailer::MessageDelivery](actionmailer/messagedelivery) * 
[ActionMailer::Parameterized](actionmailer/parameterized) * [ActionMailer::Parameterized::ClassMethods](actionmailer/parameterized/classmethods) * [ActionMailer::Preview](actionmailer/preview) * [ActionMailer::Previews::ClassMethods](actionmailer/previews/classmethods) * [ActionMailer::Rescuable](actionmailer/rescuable) * [ActionMailer::TestHelper](actionmailer/testhelper) * [ActionText](actiontext) * [ActionText::Attribute](actiontext/attribute) * [ActionText::FixtureSet](actiontext/fixtureset) * [ActionText::RichText](actiontext/richtext) * [ActionText::SystemTestHelper](actiontext/systemtesthelper) * [ActionText::TagHelper](actiontext/taghelper) * [ActionView](actionview) * [ActionView::Base](actionview/base) * [ActionView::Context](actionview/context) * [ActionView::Digestor](actionview/digestor) * [ActionView::FileSystemResolver](actionview/filesystemresolver) * [ActionView::Helpers::AssetTagHelper](actionview/helpers/assettaghelper) * [ActionView::Helpers::AssetUrlHelper](actionview/helpers/asseturlhelper) * [ActionView::Helpers::AtomFeedHelper](actionview/helpers/atomfeedhelper) * [ActionView::Helpers::CacheHelper](actionview/helpers/cachehelper) * [ActionView::Helpers::CaptureHelper](actionview/helpers/capturehelper) * [ActionView::Helpers::CspHelper](actionview/helpers/csphelper) * [ActionView::Helpers::CsrfHelper](actionview/helpers/csrfhelper) * [ActionView::Helpers::DateHelper](actionview/helpers/datehelper) * [ActionView::Helpers::DebugHelper](actionview/helpers/debughelper) * [ActionView::Helpers::FormBuilder](actionview/helpers/formbuilder) * [ActionView::Helpers::FormHelper](actionview/helpers/formhelper) * [ActionView::Helpers::FormOptionsHelper](actionview/helpers/formoptionshelper) * [ActionView::Helpers::FormTagHelper](actionview/helpers/formtaghelper) * [ActionView::Helpers::JavaScriptHelper](actionview/helpers/javascripthelper) * [ActionView::Helpers::NumberHelper](actionview/helpers/numberhelper) * 
[ActionView::Helpers::NumberHelper::InvalidNumberError](actionview/helpers/numberhelper/invalidnumbererror) * [ActionView::Helpers::OutputSafetyHelper](actionview/helpers/outputsafetyhelper) * [ActionView::Helpers::RenderingHelper](actionview/helpers/renderinghelper) * [ActionView::Helpers::SanitizeHelper](actionview/helpers/sanitizehelper) * [ActionView::Helpers::TagHelper](actionview/helpers/taghelper) * [ActionView::Helpers::TextHelper](actionview/helpers/texthelper) * [ActionView::Helpers::TranslationHelper](actionview/helpers/translationhelper) * [ActionView::Helpers::UrlHelper](actionview/helpers/urlhelper) * [ActionView::Layouts](actionview/layouts) * [ActionView::Layouts::ClassMethods](actionview/layouts/classmethods) * [ActionView::PartialIteration](actionview/partialiteration) * [ActionView::PartialRenderer](actionview/partialrenderer) * [ActionView::RecordIdentifier](actionview/recordidentifier) * [ActionView::Renderer](actionview/renderer) * [ActionView::Rendering](actionview/rendering) * [ActionView::RoutingUrlFor](actionview/routingurlfor) * [ActionView::Template](actionview/template) * [ActionView::TemplatePath](actionview/templatepath) * [ActionView::TestCase::TestController](actionview/testcase/testcontroller) * [ActionView::ViewPaths](actionview/viewpaths) * [ActionView::ViewPaths::ClassMethods](actionview/viewpaths/classmethods) * [ActiveJob](activejob) * [ActiveJob::Arguments](activejob/arguments) * [ActiveJob::Base](activejob/base) * [ActiveJob::Callbacks](activejob/callbacks) * [ActiveJob::Callbacks::ClassMethods](activejob/callbacks/classmethods) * [ActiveJob::Core](activejob/core) * [ActiveJob::Core::ClassMethods](activejob/core/classmethods) * [ActiveJob::DeserializationError](activejob/deserializationerror) * [ActiveJob::EnqueueError](activejob/enqueueerror) * [ActiveJob::Enqueuing](activejob/enqueuing) * [ActiveJob::Enqueuing::ClassMethods](activejob/enqueuing/classmethods) * [ActiveJob::Exceptions](activejob/exceptions) * 
[ActiveJob::Exceptions::ClassMethods](activejob/exceptions/classmethods) * [ActiveJob::Execution](activejob/execution) * [ActiveJob::Execution::ClassMethods](activejob/execution/classmethods) * [ActiveJob::QueueAdapter::ClassMethods](activejob/queueadapter/classmethods) * [ActiveJob::QueueAdapters](activejob/queueadapters) * [ActiveJob::QueueAdapters::AsyncAdapter](activejob/queueadapters/asyncadapter) * [ActiveJob::QueueAdapters::BackburnerAdapter](activejob/queueadapters/backburneradapter) * [ActiveJob::QueueAdapters::DelayedJobAdapter](activejob/queueadapters/delayedjobadapter) * [ActiveJob::QueueAdapters::InlineAdapter](activejob/queueadapters/inlineadapter) * [ActiveJob::QueueAdapters::QueAdapter](activejob/queueadapters/queadapter) * [ActiveJob::QueueAdapters::QueueClassicAdapter](activejob/queueadapters/queueclassicadapter) * [ActiveJob::QueueAdapters::ResqueAdapter](activejob/queueadapters/resqueadapter) * [ActiveJob::QueueAdapters::SidekiqAdapter](activejob/queueadapters/sidekiqadapter) * [ActiveJob::QueueAdapters::SneakersAdapter](activejob/queueadapters/sneakersadapter) * [ActiveJob::QueueAdapters::SuckerPunchAdapter](activejob/queueadapters/suckerpunchadapter) * [ActiveJob::QueueAdapters::TestAdapter](activejob/queueadapters/testadapter) * [ActiveJob::QueueName](activejob/queuename) * [ActiveJob::QueueName::ClassMethods](activejob/queuename/classmethods) * [ActiveJob::QueuePriority](activejob/queuepriority) * [ActiveJob::QueuePriority::ClassMethods](activejob/queuepriority/classmethods) * [ActiveJob::SerializationError](activejob/serializationerror) * [ActiveJob::Serializers::ObjectSerializer](activejob/serializers/objectserializer) * [ActiveJob::TestHelper](activejob/testhelper) * [ActiveModel](activemodel) * [ActiveModel::API](activemodel/api) * [ActiveModel::AttributeAssignment](activemodel/attributeassignment) * [ActiveModel::AttributeMethods](activemodel/attributemethods) * 
[ActiveModel::AttributeMethods::ClassMethods](activemodel/attributemethods/classmethods) * [ActiveModel::Attributes::ClassMethods](activemodel/attributes/classmethods) * [ActiveModel::Callbacks](activemodel/callbacks) * [ActiveModel::Conversion](activemodel/conversion) * [ActiveModel::Dirty](activemodel/dirty) * [ActiveModel::EachValidator](activemodel/eachvalidator) * [ActiveModel::Error](activemodel/error) * [ActiveModel::Errors](activemodel/errors) * [ActiveModel::ForbiddenAttributesError](activemodel/forbiddenattributeserror) * [ActiveModel::Lint::Tests](activemodel/lint/tests) * [ActiveModel::MissingAttributeError](activemodel/missingattributeerror) * [ActiveModel::Model](activemodel/model) * [ActiveModel::Name](activemodel/name) * [ActiveModel::Naming](activemodel/naming) * [ActiveModel::RangeError](activemodel/rangeerror) * [ActiveModel::SecurePassword](activemodel/securepassword) * [ActiveModel::SecurePassword::ClassMethods](activemodel/securepassword/classmethods) * [ActiveModel::Serialization](activemodel/serialization) * [ActiveModel::Serializers::JSON](activemodel/serializers/json) * [ActiveModel::StrictValidationFailed](activemodel/strictvalidationfailed) * [ActiveModel::Translation](activemodel/translation) * [ActiveModel::Type](activemodel/type) * [ActiveModel::Type::Boolean](activemodel/type/boolean) * [ActiveModel::Type::Value](activemodel/type/value) * [ActiveModel::UnknownAttributeError](activemodel/unknownattributeerror) * [ActiveModel::ValidationError](activemodel/validationerror) * [ActiveModel::Validations](activemodel/validations) * [ActiveModel::Validations::Callbacks](activemodel/validations/callbacks) * [ActiveModel::Validations::Callbacks::ClassMethods](activemodel/validations/callbacks/classmethods) * [ActiveModel::Validations::ClassMethods](activemodel/validations/classmethods) * [ActiveModel::Validations::HelperMethods](activemodel/validations/helpermethods) * [ActiveModel::Validator](activemodel/validator) * 
[ActiveRecord](activerecord) * [ActiveRecord::ActiveJobRequiredError](activerecord/activejobrequirederror) * [ActiveRecord::ActiveRecordError](activerecord/activerecorderror) * [ActiveRecord::AdapterNotFound](activerecord/adapternotfound) * [ActiveRecord::AdapterNotSpecified](activerecord/adapternotspecified) * [ActiveRecord::AdapterTimeout](activerecord/adaptertimeout) * [ActiveRecord::Aggregations](activerecord/aggregations) * [ActiveRecord::Aggregations::ClassMethods](activerecord/aggregations/classmethods) * [ActiveRecord::AssociationTypeMismatch](activerecord/associationtypemismatch) * [ActiveRecord::Associations::ClassMethods](activerecord/associations/classmethods) * [ActiveRecord::Associations::CollectionProxy](activerecord/associations/collectionproxy) * [ActiveRecord::AsynchronousQueryInsideTransactionError](activerecord/asynchronousqueryinsidetransactionerror) * [ActiveRecord::AttributeAssignmentError](activerecord/attributeassignmenterror) * [ActiveRecord::AttributeMethods](activerecord/attributemethods) * [ActiveRecord::AttributeMethods::BeforeTypeCast](activerecord/attributemethods/beforetypecast) * [ActiveRecord::AttributeMethods::ClassMethods](activerecord/attributemethods/classmethods) * [ActiveRecord::AttributeMethods::Dirty](activerecord/attributemethods/dirty) * [ActiveRecord::AttributeMethods::PrimaryKey](activerecord/attributemethods/primarykey) * [ActiveRecord::AttributeMethods::PrimaryKey::ClassMethods](activerecord/attributemethods/primarykey/classmethods) * [ActiveRecord::AttributeMethods::Read](activerecord/attributemethods/read) * [ActiveRecord::AttributeMethods::Serialization::ClassMethods](activerecord/attributemethods/serialization/classmethods) * [ActiveRecord::AttributeMethods::Write](activerecord/attributemethods/write) * [ActiveRecord::Attributes](activerecord/attributes) * [ActiveRecord::Attributes::ClassMethods](activerecord/attributes/classmethods) * [ActiveRecord::AutosaveAssociation](activerecord/autosaveassociation) * 
[ActiveRecord::Base](activerecord/base) * [ActiveRecord::Batches](activerecord/batches) * [ActiveRecord::Batches::BatchEnumerator](activerecord/batches/batchenumerator) * [ActiveRecord::Calculations](activerecord/calculations) * [ActiveRecord::Callbacks](activerecord/callbacks) * [ActiveRecord::Callbacks::ClassMethods](activerecord/callbacks/classmethods) * [ActiveRecord::ConfigurationError](activerecord/configurationerror) * [ActiveRecord::ConnectionAdapters::AbstractAdapter](activerecord/connectionadapters/abstractadapter) * [ActiveRecord::ConnectionAdapters::AbstractMysqlAdapter](activerecord/connectionadapters/abstractmysqladapter) * [ActiveRecord::ConnectionAdapters::ColumnMethods](activerecord/connectionadapters/columnmethods) * [ActiveRecord::ConnectionAdapters::ConnectionHandler](activerecord/connectionadapters/connectionhandler) * [ActiveRecord::ConnectionAdapters::ConnectionPool](activerecord/connectionadapters/connectionpool) * [ActiveRecord::ConnectionAdapters::ConnectionPool::Queue](activerecord/connectionadapters/connectionpool/queue) * [ActiveRecord::ConnectionAdapters::ConnectionPool::Reaper](activerecord/connectionadapters/connectionpool/reaper) * [ActiveRecord::ConnectionAdapters::DatabaseLimits](activerecord/connectionadapters/databaselimits) * [ActiveRecord::ConnectionAdapters::DatabaseStatements](activerecord/connectionadapters/databasestatements) * [ActiveRecord::ConnectionAdapters::MySQL::DatabaseStatements](activerecord/connectionadapters/mysql/databasestatements) * [ActiveRecord::ConnectionAdapters::Mysql2Adapter](activerecord/connectionadapters/mysql2adapter) * [ActiveRecord::ConnectionAdapters::PostgreSQL::ColumnMethods](activerecord/connectionadapters/postgresql/columnmethods) * [ActiveRecord::ConnectionAdapters::PostgreSQL::DatabaseStatements](activerecord/connectionadapters/postgresql/databasestatements) * [ActiveRecord::ConnectionAdapters::PostgreSQL::Quoting](activerecord/connectionadapters/postgresql/quoting) * 
[ActiveRecord::ConnectionAdapters::PostgreSQL::SchemaStatements](activerecord/connectionadapters/postgresql/schemastatements) * [ActiveRecord::ConnectionAdapters::PostgreSQLAdapter](activerecord/connectionadapters/postgresqladapter) * [ActiveRecord::ConnectionAdapters::QueryCache](activerecord/connectionadapters/querycache) * [ActiveRecord::ConnectionAdapters::Quoting](activerecord/connectionadapters/quoting) * [ActiveRecord::ConnectionAdapters::SQLite3Adapter](activerecord/connectionadapters/sqlite3adapter) * [ActiveRecord::ConnectionAdapters::SchemaCache](activerecord/connectionadapters/schemacache) * [ActiveRecord::ConnectionAdapters::SchemaStatements](activerecord/connectionadapters/schemastatements) * [ActiveRecord::ConnectionAdapters::Table](activerecord/connectionadapters/table) * [ActiveRecord::ConnectionAdapters::TableDefinition](activerecord/connectionadapters/tabledefinition) * [ActiveRecord::ConnectionHandling](activerecord/connectionhandling) * [ActiveRecord::ConnectionNotEstablished](activerecord/connectionnotestablished) * [ActiveRecord::ConnectionTimeoutError](activerecord/connectiontimeouterror) * [ActiveRecord::Core](activerecord/core) * [ActiveRecord::Core::ClassMethods](activerecord/core/classmethods) * [ActiveRecord::CounterCache::ClassMethods](activerecord/countercache/classmethods) * [ActiveRecord::DangerousAttributeError](activerecord/dangerousattributeerror) * [ActiveRecord::DatabaseAlreadyExists](activerecord/databasealreadyexists) * [ActiveRecord::DatabaseConfigurations](activerecord/databaseconfigurations) * [ActiveRecord::DatabaseConfigurations::HashConfig](activerecord/databaseconfigurations/hashconfig) * [ActiveRecord::DatabaseConfigurations::UrlConfig](activerecord/databaseconfigurations/urlconfig) * [ActiveRecord::DatabaseConnectionError](activerecord/databaseconnectionerror) * [ActiveRecord::Deadlocked](activerecord/deadlocked) * [ActiveRecord::DelegatedType](activerecord/delegatedtype) * 
[ActiveRecord::DestroyAssociationAsyncJob](activerecord/destroyassociationasyncjob) * [ActiveRecord::EagerLoadPolymorphicError](activerecord/eagerloadpolymorphicerror) * [ActiveRecord::Encryption::Cipher](activerecord/encryption/cipher) * [ActiveRecord::Encryption::Cipher::Aes256Gcm](activerecord/encryption/cipher/aes256gcm) * [ActiveRecord::Encryption::Config](activerecord/encryption/config) * [ActiveRecord::Encryption::Configurable](activerecord/encryption/configurable) * [ActiveRecord::Encryption::Context](activerecord/encryption/context) * [ActiveRecord::Encryption::Contexts](activerecord/encryption/contexts) * [ActiveRecord::Encryption::DerivedSecretKeyProvider](activerecord/encryption/derivedsecretkeyprovider) * [ActiveRecord::Encryption::DeterministicKeyProvider](activerecord/encryption/deterministickeyprovider) * [ActiveRecord::Encryption::EncryptableRecord](activerecord/encryption/encryptablerecord) * [ActiveRecord::Encryption::EncryptedAttributeType](activerecord/encryption/encryptedattributetype) * [ActiveRecord::Encryption::EncryptingOnlyEncryptor](activerecord/encryption/encryptingonlyencryptor) * [ActiveRecord::Encryption::Encryptor](activerecord/encryption/encryptor) * [ActiveRecord::Encryption::EnvelopeEncryptionKeyProvider](activerecord/encryption/envelopeencryptionkeyprovider) * [ActiveRecord::Encryption::ExtendedDeterministicQueries](activerecord/encryption/extendeddeterministicqueries) * [ActiveRecord::Encryption::Key](activerecord/encryption/key) * [ActiveRecord::Encryption::KeyGenerator](activerecord/encryption/keygenerator) * [ActiveRecord::Encryption::KeyProvider](activerecord/encryption/keyprovider) * [ActiveRecord::Encryption::Message](activerecord/encryption/message) * [ActiveRecord::Encryption::MessageSerializer](activerecord/encryption/messageserializer) * [ActiveRecord::Encryption::NullEncryptor](activerecord/encryption/nullencryptor) * [ActiveRecord::Encryption::Properties](activerecord/encryption/properties) * 
[ActiveRecord::Encryption::ReadOnlyNullEncryptor](activerecord/encryption/readonlynullencryptor) * [ActiveRecord::Encryption::Scheme](activerecord/encryption/scheme) * [ActiveRecord::Enum](activerecord/enum) * [ActiveRecord::ExclusiveConnectionTimeoutError](activerecord/exclusiveconnectiontimeouterror) * [ActiveRecord::FinderMethods](activerecord/findermethods) * [ActiveRecord::FixtureSet](activerecord/fixtureset) * [ActiveRecord::ImmutableRelation](activerecord/immutablerelation) * [ActiveRecord::Inheritance](activerecord/inheritance) * [ActiveRecord::Inheritance::ClassMethods](activerecord/inheritance/classmethods) * [ActiveRecord::Integration](activerecord/integration) * [ActiveRecord::Integration::ClassMethods](activerecord/integration/classmethods) * [ActiveRecord::InvalidForeignKey](activerecord/invalidforeignkey) * [ActiveRecord::IrreversibleMigration](activerecord/irreversiblemigration) * [ActiveRecord::IrreversibleOrderError](activerecord/irreversibleordererror) * [ActiveRecord::LockWaitTimeout](activerecord/lockwaittimeout) * [ActiveRecord::Locking::Optimistic](activerecord/locking/optimistic) * [ActiveRecord::Locking::Optimistic::ClassMethods](activerecord/locking/optimistic/classmethods) * [ActiveRecord::Locking::Pessimistic](activerecord/locking/pessimistic) * [ActiveRecord::Middleware::DatabaseSelector](activerecord/middleware/databaseselector) * [ActiveRecord::Middleware::ShardSelector](activerecord/middleware/shardselector) * [ActiveRecord::Migration](activerecord/migration) * [ActiveRecord::Migration::CheckPending](activerecord/migration/checkpending) * [ActiveRecord::Migration::CommandRecorder](activerecord/migration/commandrecorder) * [ActiveRecord::MigrationContext](activerecord/migrationcontext) * [ActiveRecord::MismatchedForeignKey](activerecord/mismatchedforeignkey) * [ActiveRecord::ModelSchema](activerecord/modelschema) * [ActiveRecord::ModelSchema::ClassMethods](activerecord/modelschema/classmethods) * 
[ActiveRecord::MultiparameterAssignmentErrors](activerecord/multiparameterassignmenterrors) * [ActiveRecord::NestedAttributes::ClassMethods](activerecord/nestedattributes/classmethods) * [ActiveRecord::NoDatabaseError](activerecord/nodatabaseerror) * [ActiveRecord::NoTouching](activerecord/notouching) * [ActiveRecord::NoTouching::ClassMethods](activerecord/notouching/classmethods) * [ActiveRecord::NotNullViolation](activerecord/notnullviolation) * [ActiveRecord::Persistence](activerecord/persistence) * [ActiveRecord::Persistence::ClassMethods](activerecord/persistence/classmethods) * [ActiveRecord::PreparedStatementCacheExpired](activerecord/preparedstatementcacheexpired) * [ActiveRecord::PreparedStatementInvalid](activerecord/preparedstatementinvalid) * [ActiveRecord::QueryAborted](activerecord/queryaborted) * [ActiveRecord::QueryCache::ClassMethods](activerecord/querycache/classmethods) * [ActiveRecord::QueryCanceled](activerecord/querycanceled) * [ActiveRecord::QueryLogs](activerecord/querylogs) * [ActiveRecord::QueryMethods](activerecord/querymethods) * [ActiveRecord::QueryMethods::WhereChain](activerecord/querymethods/wherechain) * [ActiveRecord::Querying](activerecord/querying) * [ActiveRecord::RangeError](activerecord/rangeerror) * [ActiveRecord::ReadOnlyError](activerecord/readonlyerror) * [ActiveRecord::ReadOnlyRecord](activerecord/readonlyrecord) * [ActiveRecord::ReadonlyAttributes::ClassMethods](activerecord/readonlyattributes/classmethods) * [ActiveRecord::RecordInvalid](activerecord/recordinvalid) * [ActiveRecord::RecordNotDestroyed](activerecord/recordnotdestroyed) * [ActiveRecord::RecordNotFound](activerecord/recordnotfound) * [ActiveRecord::RecordNotSaved](activerecord/recordnotsaved) * [ActiveRecord::RecordNotUnique](activerecord/recordnotunique) * [ActiveRecord::Reflection::ClassMethods](activerecord/reflection/classmethods) * [ActiveRecord::Reflection::MacroReflection](activerecord/reflection/macroreflection) * 
[ActiveRecord::Relation](activerecord/relation) * [ActiveRecord::Relation::RecordFetchWarning](activerecord/relation/recordfetchwarning) * [ActiveRecord::Result](activerecord/result) * [ActiveRecord::Rollback](activerecord/rollback) * [ActiveRecord::Sanitization::ClassMethods](activerecord/sanitization/classmethods) * [ActiveRecord::Schema](activerecord/schema) * [ActiveRecord::Scoping::Default::ClassMethods](activerecord/scoping/default/classmethods) * [ActiveRecord::Scoping::Named::ClassMethods](activerecord/scoping/named/classmethods) * [ActiveRecord::SecureToken::ClassMethods](activerecord/securetoken/classmethods) * [ActiveRecord::SerializationFailure](activerecord/serializationfailure) * [ActiveRecord::SerializationTypeMismatch](activerecord/serializationtypemismatch) * [ActiveRecord::SignedId](activerecord/signedid) * [ActiveRecord::SignedId::ClassMethods](activerecord/signedid/classmethods) * [ActiveRecord::SoleRecordExceeded](activerecord/solerecordexceeded) * [ActiveRecord::SpawnMethods](activerecord/spawnmethods) * [ActiveRecord::StaleObjectError](activerecord/staleobjecterror) * [ActiveRecord::StatementInvalid](activerecord/statementinvalid) * [ActiveRecord::StatementTimeout](activerecord/statementtimeout) * [ActiveRecord::Store](activerecord/store) * [ActiveRecord::StrictLoadingViolationError](activerecord/strictloadingviolationerror) * [ActiveRecord::SubclassNotFound](activerecord/subclassnotfound) * [ActiveRecord::Suppressor](activerecord/suppressor) * [ActiveRecord::TableNotSpecified](activerecord/tablenotspecified) * [ActiveRecord::TestFixtures::ClassMethods](activerecord/testfixtures/classmethods) * [ActiveRecord::Timestamp](activerecord/timestamp) * [ActiveRecord::TransactionIsolationError](activerecord/transactionisolationerror) * [ActiveRecord::TransactionRollbackError](activerecord/transactionrollbackerror) * [ActiveRecord::Transactions](activerecord/transactions) * 
[ActiveRecord::Transactions::ClassMethods](activerecord/transactions/classmethods) * [ActiveRecord::Type::Boolean](activemodel/type/boolean) * [ActiveRecord::Type::Value](activemodel/type/value) * [ActiveRecord::UnknownAttributeError](activemodel/unknownattributeerror) * [ActiveRecord::UnknownAttributeReference](activerecord/unknownattributereference) * [ActiveRecord::UnknownPrimaryKey](activerecord/unknownprimarykey) * [ActiveRecord::Validations](activerecord/validations) * [ActiveRecord::Validations::ClassMethods](activerecord/validations/classmethods) * [ActiveRecord::ValueTooLong](activerecord/valuetoolong) * [ActiveRecord::WrappedDatabaseException](activerecord/wrappeddatabaseexception) * [ActiveStorage::AnalyzeJob](activestorage/analyzejob) * [ActiveStorage::Analyzer](activestorage/analyzer) * [ActiveStorage::Analyzer::AudioAnalyzer](activestorage/analyzer/audioanalyzer) * [ActiveStorage::Analyzer::ImageAnalyzer](activestorage/analyzer/imageanalyzer) * [ActiveStorage::Analyzer::ImageAnalyzer::ImageMagick](activestorage/analyzer/imageanalyzer/imagemagick) * [ActiveStorage::Analyzer::ImageAnalyzer::Vips](activestorage/analyzer/imageanalyzer/vips) * [ActiveStorage::Analyzer::VideoAnalyzer](activestorage/analyzer/videoanalyzer) * [ActiveStorage::Attached](activestorage/attached) * [ActiveStorage::Attached::Many](activestorage/attached/many) * [ActiveStorage::Attached::Model](activestorage/attached/model) * [ActiveStorage::Attached::One](activestorage/attached/one) * [ActiveStorage::Attachment](activestorage/attachment) * [ActiveStorage::BaseController](activestorage/basecontroller) * [ActiveStorage::Blob](activestorage/blob) * [ActiveStorage::Blob::Analyzable](activestorage/blob/analyzable) * [ActiveStorage::Blob::Representable](activestorage/blob/representable) * [ActiveStorage::Blobs::ProxyController](activestorage/blobs/proxycontroller) * [ActiveStorage::Blobs::RedirectController](activestorage/blobs/redirectcontroller) * 
[ActiveStorage::DirectUploadsController](activestorage/directuploadscontroller) * [ActiveStorage::DiskController](activestorage/diskcontroller) * [ActiveStorage::Error](activestorage/error) * [ActiveStorage::FileNotFoundError](activestorage/filenotfounderror) * [ActiveStorage::Filename](activestorage/filename) * [ActiveStorage::FixtureSet](activestorage/fixtureset) * [ActiveStorage::IntegrityError](activestorage/integrityerror) * [ActiveStorage::InvalidDirectUploadTokenError](activestorage/invaliddirectuploadtokenerror) * [ActiveStorage::InvariableError](activestorage/invariableerror) * [ActiveStorage::MirrorJob](activestorage/mirrorjob) * [ActiveStorage::Preview](activestorage/preview) * [ActiveStorage::PreviewError](activestorage/previewerror) * [ActiveStorage::Previewer](activestorage/previewer) * [ActiveStorage::PurgeJob](activestorage/purgejob) * [ActiveStorage::Reflection::ActiveRecordExtensions::ClassMethods](activestorage/reflection/activerecordextensions/classmethods) * [ActiveStorage::Representations::ProxyController](activestorage/representations/proxycontroller) * [ActiveStorage::Representations::RedirectController](activestorage/representations/redirectcontroller) * [ActiveStorage::Service](activestorage/service) * [ActiveStorage::Service::AzureStorageService](activestorage/service/azurestorageservice) * [ActiveStorage::Service::DiskService](activestorage/service/diskservice) * [ActiveStorage::Service::GCSService](activestorage/service/gcsservice) * [ActiveStorage::Service::MirrorService](activestorage/service/mirrorservice) * [ActiveStorage::Service::S3Service](activestorage/service/s3service) * [ActiveStorage::SetCurrent](activestorage/setcurrent) * [ActiveStorage::Streaming](activestorage/streaming) * [ActiveStorage::Transformers::Transformer](activestorage/transformers/transformer) * [ActiveStorage::UnpreviewableError](activestorage/unpreviewableerror) * [ActiveStorage::UnrepresentableError](activestorage/unrepresentableerror) * 
[ActiveStorage::Variant](activestorage/variant) * [ActiveStorage::VariantWithRecord](activestorage/variantwithrecord) * [ActiveStorage::Variation](activestorage/variation) * [ActiveSupport](activesupport) * [ActiveSupport::ActionableError](activesupport/actionableerror) * [ActiveSupport::ActionableError::ClassMethods](activesupport/actionableerror/classmethods) * [ActiveSupport::ArrayInquirer](activesupport/arrayinquirer) * [ActiveSupport::Autoload](activesupport/autoload) * [ActiveSupport::BacktraceCleaner](activesupport/backtracecleaner) * [ActiveSupport::Benchmarkable](activesupport/benchmarkable) * [ActiveSupport::Cache](activesupport/cache) * [ActiveSupport::Cache::FileStore](activesupport/cache/filestore) * [ActiveSupport::Cache::MemCacheStore](activesupport/cache/memcachestore) * [ActiveSupport::Cache::MemoryStore](activesupport/cache/memorystore) * [ActiveSupport::Cache::NullStore](activesupport/cache/nullstore) * [ActiveSupport::Cache::RedisCacheStore](activesupport/cache/rediscachestore) * [ActiveSupport::Cache::Store](activesupport/cache/store) * [ActiveSupport::Cache::Strategy::LocalCache](activesupport/cache/strategy/localcache) * [ActiveSupport::Cache::Strategy::LocalCache::LocalStore](activesupport/cache/strategy/localcache/localstore) * [ActiveSupport::CachingKeyGenerator](activesupport/cachingkeygenerator) * [ActiveSupport::Callbacks](activesupport/callbacks) * [ActiveSupport::Callbacks::CallTemplate::MethodCall](activesupport/callbacks/calltemplate/methodcall) * [ActiveSupport::Callbacks::ClassMethods](activesupport/callbacks/classmethods) * [ActiveSupport::CompareWithRange](activesupport/comparewithrange) * [ActiveSupport::Concern](activesupport/concern) * [ActiveSupport::Concurrency::LoadInterlockAwareMonitor](activesupport/concurrency/loadinterlockawaremonitor) * [ActiveSupport::Concurrency::ShareLock](activesupport/concurrency/sharelock) * [ActiveSupport::Configurable](activesupport/configurable) * 
[ActiveSupport::Configurable::ClassMethods](activesupport/configurable/classmethods) * [ActiveSupport::Configurable::Configuration](activesupport/configurable/configuration) * [ActiveSupport::CurrentAttributes](activesupport/currentattributes) * [ActiveSupport::Dependencies](activesupport/dependencies) * [ActiveSupport::Dependencies::RequireDependency](activesupport/dependencies/requiredependency) * [ActiveSupport::Deprecation](activesupport/deprecation) * [ActiveSupport::Deprecation::Behavior](activesupport/deprecation/behavior) * [ActiveSupport::Deprecation::DeprecatedConstantAccessor](activesupport/deprecation/deprecatedconstantaccessor) * [ActiveSupport::Deprecation::DeprecatedConstantProxy](activesupport/deprecation/deprecatedconstantproxy) * [ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy](activesupport/deprecation/deprecatedinstancevariableproxy) * [ActiveSupport::Deprecation::DeprecatedObjectProxy](activesupport/deprecation/deprecatedobjectproxy) * [ActiveSupport::Deprecation::Disallowed](activesupport/deprecation/disallowed) * [ActiveSupport::Deprecation::MethodWrapper](activesupport/deprecation/methodwrapper) * [ActiveSupport::Deprecation::Reporting](activesupport/deprecation/reporting) * [ActiveSupport::DeprecationException](activesupport/deprecationexception) * [ActiveSupport::DescendantsTracker](activesupport/descendantstracker) * [ActiveSupport::Duration](activesupport/duration) * [ActiveSupport::EncryptedConfiguration](activesupport/encryptedconfiguration) * [ActiveSupport::ErrorReporter](activesupport/errorreporter) * [ActiveSupport::ExecutionWrapper](activesupport/executionwrapper) * [ActiveSupport::FileUpdateChecker](activesupport/fileupdatechecker) * [ActiveSupport::Gzip](activesupport/gzip) * [ActiveSupport::HashWithIndifferentAccess](activesupport/hashwithindifferentaccess) * [ActiveSupport::HashWithIndifferentAccess](activesupport/hashwithindifferentaccess) * [ActiveSupport::Inflector](activesupport/inflector) * 
[ActiveSupport::Inflector::Inflections](activesupport/inflector/inflections) * [ActiveSupport::InheritableOptions](activesupport/inheritableoptions) * [ActiveSupport::JSON](activesupport/json) * [ActiveSupport::KeyGenerator](activesupport/keygenerator) * [ActiveSupport::LazyLoadHooks](activesupport/lazyloadhooks) * [ActiveSupport::LogSubscriber](activesupport/logsubscriber) * [ActiveSupport::LogSubscriber::TestHelper](activesupport/logsubscriber/testhelper) * [ActiveSupport::Logger](activesupport/logger) * [ActiveSupport::Logger::SimpleFormatter](activesupport/logger/simpleformatter) * [ActiveSupport::LoggerSilence](activesupport/loggersilence) * [ActiveSupport::MessageEncryptor](activesupport/messageencryptor) * [ActiveSupport::MessageVerifier](activesupport/messageverifier) * [ActiveSupport::Multibyte](activesupport/multibyte) * [ActiveSupport::Multibyte::Chars](activesupport/multibyte/chars) * [ActiveSupport::Multibyte::Unicode](activesupport/multibyte/unicode) * [ActiveSupport::Notifications](activesupport/notifications) * [ActiveSupport::Notifications::Event](activesupport/notifications/event) * [ActiveSupport::Notifications::Instrumenter](activesupport/notifications/instrumenter) * [ActiveSupport::NumberHelper](activesupport/numberhelper) * [ActiveSupport::NumericWithFormat](activesupport/numericwithformat) * [ActiveSupport::OrderedOptions](activesupport/orderedoptions) * [ActiveSupport::ParameterFilter](activesupport/parameterfilter) * [ActiveSupport::PerThreadRegistry](activesupport/perthreadregistry) * [ActiveSupport::ProxyObject](activesupport/proxyobject) * [ActiveSupport::RangeWithFormat](activesupport/rangewithformat) * [ActiveSupport::Reloader](activesupport/reloader) * [ActiveSupport::Rescuable](activesupport/rescuable) * [ActiveSupport::Rescuable::ClassMethods](activesupport/rescuable/classmethods) * [ActiveSupport::SafeBuffer::SafeConcatError](activesupport/safebuffer/safeconcaterror) * 
[ActiveSupport::SecureCompareRotator](activesupport/securecomparerotator) * [ActiveSupport::SecurityUtils](activesupport/securityutils) * [ActiveSupport::StringInquirer](activesupport/stringinquirer) * [ActiveSupport::Subscriber](activesupport/subscriber) * [ActiveSupport::TaggedLogging](activesupport/taggedlogging) * [ActiveSupport::TestCase](activesupport/testcase) * [ActiveSupport::Testing::Assertions](activesupport/testing/assertions) * [ActiveSupport::Testing::ConstantLookup](activesupport/testing/constantlookup) * [ActiveSupport::Testing::Declarative](activesupport/testing/declarative) * [ActiveSupport::Testing::Deprecation](activesupport/testing/deprecation) * [ActiveSupport::Testing::FileFixtures](activesupport/testing/filefixtures) * [ActiveSupport::Testing::Isolation::Subprocess](activesupport/testing/isolation/subprocess) * [ActiveSupport::Testing::SetupAndTeardown](activesupport/testing/setupandteardown) * [ActiveSupport::Testing::SetupAndTeardown::ClassMethods](activesupport/testing/setupandteardown/classmethods) * [ActiveSupport::Testing::TimeHelpers](activesupport/testing/timehelpers) * [ActiveSupport::TimeWithZone](activesupport/timewithzone) * [ActiveSupport::TimeZone](activesupport/timezone) * [Array](array) * [Benchmark](benchmark) * [Class](class) * [Date](date) * [DateAndTime::Calculations](dateandtime/calculations) * [DateAndTime::Zones](dateandtime/zones) * [DateTime](datetime) * [Delegator](delegator) * [Digest::UUID](digest/uuid) * [ERB::Util](erb/util) * [Enumerable](enumerable) * [Enumerable::SoleItemExpectedError](enumerable/soleitemexpectederror) * [FalseClass](falseclass) * [File](file) * [Hash](hash) * [Integer](integer) * [Kernel](kernel) * [LoadError](loaderror) * [Method](method) * [Mime](mime) * [Mime::Type](mime/type) * [Module](module) * [Module::Concerning](module/concerning) * [Module::DelegationError](module/delegationerror) * [NameError](nameerror) * [NilClass](nilclass) * [Numeric](numeric) * [Object](object) * 
[Pathname](pathname) * [Rails::Application](rails/application) * [Rails::Application::Configuration](rails/application/configuration) * [Rails::Command](rails/command) * [Rails::Command::Actions](rails/command/actions) * [Rails::Command::Base](rails/command/base) * [Rails::Configuration::MiddlewareStackProxy](rails/configuration/middlewarestackproxy) * [Rails::ConsoleMethods](rails/consolemethods) * [Rails::Engine](rails/engine) * [Rails::Engine::Configuration](rails/engine/configuration) * [Rails::Generators](rails/generators) * [Rails::Generators::Actions](rails/generators/actions) * [Rails::Generators::ActiveModel](rails/generators/activemodel) * [Rails::Generators::AppGenerator](rails/generators/appgenerator) * [Rails::Generators::Base](rails/generators/base) * [Rails::Generators::Migration](rails/generators/migration) * [Rails::Generators::NamedBase](rails/generators/namedbase) * [Rails::Generators::TestCase](rails/generators/testcase) * [Rails::Info](rails/info) * [Rails::Paths::Path](rails/paths/path) * [Rails::Paths::Root](rails/paths/root) * [Rails::Rack::Logger](rails/rack/logger) * [Rails::Rails::Conductor::ActionMailbox::IncineratesController](rails/rails/conductor/actionmailbox/incineratescontroller) * [Rails::Rails::Conductor::ActionMailbox::ReroutesController](rails/rails/conductor/actionmailbox/reroutescontroller) * [Rails::Railtie](rails/railtie) * [Rails::Railtie::Configuration](rails/railtie/configuration) * [Rails::SourceAnnotationExtractor](rails/sourceannotationextractor) * [Rails::SourceAnnotationExtractor::Annotation](rails/sourceannotationextractor/annotation) * [Range](range) * [Regexp](regexp) * [SecureRandom](securerandom) * [Singleton](singleton) * [String](string) * [Time](time) * [TrueClass](trueclass) * [UnboundMethod](unboundmethod)
module Enumerable
==================

compact\_blank() Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 218
def compact_blank
  reject(&:blank?)
end
```

Returns a new `Array` without the blank items. Uses [`Object#blank?`](object#method-i-blank-3F) for determining if an item is blank.

```
[1, "", nil, 2, " ", [], {}, false, true].compact_blank
# => [1, 2, true]

Set.new([nil, "", 1, 2]).compact_blank
# => [2, 1] (or [1, 2])
```

When called on a `Hash`, returns a new `Hash` without the blank values.

```
{ a: "", b: 1, c: nil, d: [], e: false, f: true }.compact_blank
# => { b: 1, f: true }
```

exclude?(object) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 152
def exclude?(object)
  !include?(object)
end
```

The negative of `Enumerable#include?`. Returns `true` if the collection does not include the object.

excluding(\*elements) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 166
def excluding(*elements)
  elements.flatten!(1)
  reject { |element| elements.include?(element) }
end
```

Returns a copy of the enumerable excluding the specified elements.

```
["David", "Rafael", "Aaron", "Todd"].excluding "Aaron", "Todd"
# => ["David", "Rafael"]

["David", "Rafael", "Aaron", "Todd"].excluding %w[ Aaron Todd ]
# => ["David", "Rafael"]

{foo: 1, bar: 2, baz: 3}.excluding :bar
# => {foo: 1, baz: 3}
```

Also aliased as: [without](enumerable#method-i-without)

in\_order\_of(key, series) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 230
def in_order_of(key, series)
  index_by(&key).values_at(*series).compact
end
```

Returns a new `Array` where the order has been set to that provided in the `series`, based on the `key` of the objects in the original enumerable.
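Before moving on, the `exclude?` and `excluding` helpers above can be replayed in plain Ruby. This is a minimal, self-contained sketch using the method bodies from the sources shown; the real methods ship with the activesupport gem.

```ruby
# Plain-Ruby replay of the Active Support sources shown above,
# so the examples run without the activesupport gem installed.
module Enumerable
  def exclude?(object)
    !include?(object) # exact body from the Rails source
  end

  def excluding(*elements)
    elements.flatten!(1) # accept either varargs or a single array
    reject { |element| elements.include?(element) }
  end
end

p [1, 2, 3].exclude?(4)  # => true
p [1, 2, 3].exclude?(3)  # => false
p %w[David Rafael Aaron Todd].excluding("Aaron", "Todd")
# => ["David", "Rafael"]
p %w[David Rafael Aaron Todd].excluding(%w[Aaron Todd])
# => ["David", "Rafael"]
```

Note that the Hash example in the reference (`{foo: 1, bar: 2, baz: 3}.excluding :bar`) relies on a separate Hash-specific override in Active Support, not on this Enumerable body.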
```
[ Person.find(5), Person.find(3), Person.find(1) ].in_order_of(:id, [ 1, 5, 3 ])
# => [ Person.find(1), Person.find(5), Person.find(3) ]
```

If the `series` includes keys that have no corresponding element in the [`Enumerable`](enumerable), these are ignored. If the [`Enumerable`](enumerable) has additional elements that aren't named in the `series`, these are not included in the result.

including(\*elements) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 146
def including(*elements)
  to_a.including(*elements)
end
```

Returns a new array that includes the passed elements.

```
[ 1, 2, 3 ].including(4, 5)
# => [ 1, 2, 3, 4, 5 ]

["David", "Rafael"].including %w[ Aaron Todd ]
# => ["David", "Rafael", "Aaron", "Todd"]
```

index\_by() { |elem| ... } Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 86
def index_by
  if block_given?
    result = {}
    each { |elem| result[yield(elem)] = elem }
    result
  else
    to_enum(:index_by) { size if respond_to?(:size) }
  end
end
```

Convert an enumerable to a hash, using the block result as the key and the element as the value.

```
people.index_by(&:login)
# => { "nextangle" => <Person ...>, "chade-" => <Person ...>, ...}

people.index_by { |person| "#{person.first_name} #{person.last_name}" }
# => { "Chade- Fowlersburg-e" => <Person ...>, "David Heinemeier Hansson" => <Person ...>, ...}
```

index\_with(default = INDEX\_WITH\_DEFAULT) { |elem| ... } Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 109
def index_with(default = INDEX_WITH_DEFAULT)
  if block_given?
    result = {}
    each { |elem| result[elem] = yield(elem) }
    result
  elsif default != INDEX_WITH_DEFAULT
    result = {}
    each { |elem| result[elem] = default }
    result
  else
    to_enum(:index_with) { size if respond_to?(:size) }
  end
end
```

Convert an enumerable to a hash, using the element as the key and the block result as the value.
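The two indexing helpers above can be exercised standalone. This sketch re-implements only their block forms from the sources shown (the real methods ship with Active Support and also support the enumerator and default-value branches); the `Person` struct is a stand-in for the docs' Person model.

```ruby
# Simplified plain-Ruby replay of index_by / index_with (block forms only).
module Enumerable
  def index_by
    result = {}
    each { |elem| result[yield(elem)] = elem } # block result becomes the key
    result
  end

  def index_with
    result = {}
    each { |elem| result[elem] = yield(elem) } # element becomes the key
    result
  end
end

Person = Struct.new(:login, :age)
people = [Person.new("nextangle", 40), Person.new("chade-", 35)]

p people.index_by(&:login).keys  # => ["nextangle", "chade-"]
p %w[a bb ccc].index_with { |s| s.length }
# => {"a"=>1, "bb"=>2, "ccc"=>3}
```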
```
post = Post.new(title: "hey there", body: "what's up?")

%i( title body ).index_with { |attr_name| post.public_send(attr_name) }
# => { title: "hey there", body: "what's up?" }
```

If an argument is passed instead of a block, it will be used as the value for all elements:

```
%i( created_at updated_at ).index_with(Time.now)
# => { created_at: 2020-03-09 22:31:47, updated_at: 2020-03-09 22:31:47 }
```

many?() { |element| ... } Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 127
def many?
  cnt = 0
  if block_given?
    any? do |element|
      cnt += 1 if yield element
      cnt > 1
    end
  else
    any? { (cnt += 1) > 1 }
  end
end
```

Returns `true` if the enumerable has more than 1 element. Functionally equivalent to `enum.to_a.size > 1`. Can be called with a block too, much like any?, so `people.many? { |p| p.age > 26 }` returns `true` if more than one person is over 26.

maximum(key) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 35
def maximum(key)
  map(&key).max
end
```

Calculates the maximum from the extracted elements.

```
payments = [Payment.new(5), Payment.new(15), Payment.new(10)]
payments.maximum(:price) # => 15
```

minimum(key) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 27
def minimum(key)
  map(&key).min
end
```

Calculates the minimum from the extracted elements.

```
payments = [Payment.new(5), Payment.new(15), Payment.new(10)]
payments.minimum(:price) # => 5
```

pick(\*keys) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 195
def pick(*keys)
  return if none?

  if keys.many?
    keys.map { |key| first[key] }
  else
    first[keys.first]
  end
end
```

Extract the given key from the first element in the enumerable.
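The `many?` predicate above has no standalone code example in the reference, so here is a plain-Ruby replay of the body shown, exercised on arrays (a sketch; the shipped method lives in Active Support).

```ruby
# Plain-Ruby replay of the many? source shown above.
module Enumerable
  def many?
    cnt = 0
    if block_given?
      any? do |element|
        cnt += 1 if yield element
        cnt > 1 # short-circuits as soon as a second match is seen
      end
    else
      any? { (cnt += 1) > 1 }
    end
  end
end

p [].many?                       # => false
p [1].many?                      # => false
p [1, 2].many?                   # => true
p [1, 2, 3].many? { |n| n > 1 }  # => true  (two elements match)
p [1, 2, 3].many? { |n| n > 2 }  # => false (only one matches)
```

Because it rides on `any?`, the predicate stops iterating once two matches are found rather than counting the whole collection.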
```
[{ name: "David" }, { name: "Rafael" }, { name: "Aaron" }].pick(:name)
# => "David"

[{ id: 1, name: "David" }, { id: 2, name: "Rafael" }].pick(:id, :name)
# => [1, "David"]
```

pluck(\*keys) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 179
def pluck(*keys)
  if keys.many?
    map { |element| keys.map { |key| element[key] } }
  else
    key = keys.first
    map { |element| element[key] }
  end
end
```

Extract the given key from each element in the enumerable.

```
[{ name: "David" }, { name: "Rafael" }, { name: "Aaron" }].pluck(:name)
# => ["David", "Rafael", "Aaron"]

[{ id: 1, name: "David" }, { id: 2, name: "Rafael" }].pluck(:id, :name)
# => [[1, "David"], [2, "Rafael"]]
```

sole() Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 240
def sole
  case count
  when 1   then return first # rubocop:disable Style/RedundantReturn
  when 0   then raise SoleItemExpectedError, "no item found"
  when 2.. then raise SoleItemExpectedError, "multiple items found"
  end
end
```

Returns the sole item in the enumerable. If there are no items, or more than one item, raises `Enumerable::SoleItemExpectedError`.

```
["x"].sole          # => "x"
Set.new.sole        # => Enumerable::SoleItemExpectedError: no item found
{ a: 1, b: 2 }.sole # => Enumerable::SoleItemExpectedError: multiple items found
```

sum(identity = nil, &block) Show source

```
# File activesupport/lib/active_support/core_ext/enumerable.rb, line 57
def sum(identity = nil, &block)
  if identity
    _original_sum_with_required_identity(identity, &block)
  elsif block_given?
    map(&block).sum
  # we check `first(1) == []` to check if we have an
  # empty Enumerable; checking `empty?` would return
  # true for `[nil]`, which we want to deprecate to
  # keep consistent with Ruby
  elsif first.is_a?(Numeric) || first(1) == []
    identity ||= 0
    _original_sum_with_required_identity(identity, &block)
  else
    ActiveSupport::Deprecation.warn(<<-MSG.squish)
      Rails 7.0 has deprecated Enumerable.sum in favor of Ruby's native implementation available since 2.4.
      Sum of non-numeric elements requires an initial argument.
    MSG
    inject(:+) || 0
  end
end
```

Calculates a sum from the elements.

```
payments.sum { |p| p.price * p.tax_rate }
payments.sum(&:price)
```

The latter is a shortcut for:

```
payments.inject(0) { |sum, p| sum + p.price }
```

It can also calculate the sum without the use of a block.

```
[5, 15, 10].sum              # => 30
['foo', 'bar'].sum('')       # => "foobar"
[[1, 2], [3, 1, 5]].sum([])  # => [1, 2, 3, 1, 5]
```

The default sum of an empty list is zero. You can override this default:

```
[].sum(Payment.new(0)) { |i| i.amount } # => Payment.new(0)
```

without(\*elements) Alias for: [excluding](enumerable#method-i-excluding)

class NameError
================

Parent: [Object](object)

missing\_name() Show source

```
# File activesupport/lib/active_support/core_ext/name_error.rb, line 12
def missing_name
  # Since ruby v2.3.0 `did_you_mean` gem is loaded by default.
  # It extends NameError#message with spell corrections which are SLOW.
  # We should use original_message message instead.
  message = respond_to?(:original_message) ? original_message : self.message
  return unless message.start_with?("uninitialized constant ")

  receiver = begin
    self.receiver
  rescue ArgumentError
    nil
  end

  if receiver == Object
    name.to_s
  elsif receiver
    "#{real_mod_name(receiver)}::#{self.name}"
  else
    if match = message.match(/((::)?([A-Z]\w*)(::[A-Z]\w*)*)$/)
      match[1]
    end
  end
end
```

Extract the name of the missing constant from the exception message.
``` begin HelloWorld rescue NameError => e e.missing_name end # => "HelloWorld" ``` missing\_name?(name) Show source ``` # File activesupport/lib/active_support/core_ext/name_error.rb, line 44 def missing_name?(name) if name.is_a? Symbol self.name == name else missing_name == name.to_s end end ``` Was this exception raised because the given name was missing? ``` begin HelloWorld rescue NameError => e e.missing_name?("HelloWorld") end # => true ``` rails class Module class Module ============= Parent: [Object](object) Included modules: [Module::Concerning](module/concerning) Attribute Accessors ------------------- Extends the module object with class/module and instance accessors for class/module attributes, just like the native attr\* accessors for instance attributes. Attribute Accessors per Thread ------------------------------ Extends the module object with class/module and instance accessors for class/module attributes, just like the native attr\* accessors for instance attributes, but does so on a per-thread basis. So the values are scoped within the Thread.current space under the class name of the module. Note that it can also be scoped per-fiber if [`Rails.application`](rails#method-c-application).config.active\_support.isolation\_level is set to `:fiber` DELEGATION\_RESERVED\_KEYWORDS DELEGATION\_RESERVED\_METHOD\_NAMES RUBY\_RESERVED\_KEYWORDS attr\_internal\_naming\_format[RW] alias\_attribute(new\_name, old\_name) Show source ``` # File activesupport/lib/active_support/core_ext/module/aliasing.rb, line 21 def alias_attribute(new_name, old_name) # The following reader methods use an explicit `self` receiver in order to # support aliases that start with an uppercase letter. Otherwise, they would # be resolved as constants instead. 
module_eval <<-STR, __FILE__, __LINE__ + 1 def #{new_name}; self.#{old_name}; end # def subject; self.title; end def #{new_name}?; self.#{old_name}?; end # def subject?; self.title?; end def #{new_name}=(v); self.#{old_name} = v; end # def subject=(v); self.title = v; end STR end ``` Allows you to make aliases for attributes, which includes getter, setter, and a predicate. ``` class Content < ActiveRecord::Base # has a title attribute end class Email < Content alias_attribute :subject, :title end e = Email.find(1) e.title # => "Superstars" e.subject # => "Superstars" e.subject? # => true e.subject = "Megastars" e.title # => "Megastars" ``` anonymous?() Show source ``` # File activesupport/lib/active_support/core_ext/module/anonymous.rb, line 27 def anonymous? name.nil? end ``` A module may or may not have a name. ``` module M; end M.name # => "M" m = Module.new m.name # => nil ``` `anonymous?` method returns true if module does not have a name, false otherwise: ``` Module.new.anonymous? # => true module M; end M.anonymous? # => false ``` A module gets a name when it is first assigned to a constant. Either via the `module` or `class` keyword or by an explicit assignment: ``` m = Module.new # creates an anonymous module m.anonymous? # => true M = m # m gets a name here as a side-effect m.name # => "M" m.anonymous? # => false ``` attr\_internal(\*attrs) Alias for: [attr\_internal\_accessor](module#method-i-attr_internal_accessor) attr\_internal\_accessor(\*attrs) Show source ``` # File activesupport/lib/active_support/core_ext/module/attr_internal.rb, line 16 def attr_internal_accessor(*attrs) attr_internal_reader(*attrs) attr_internal_writer(*attrs) end ``` Declares an attribute reader and writer backed by an internally-named instance variable. 
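The internally-named backing variable can be sketched without Active Support. The helper module and `Request` class below are hypothetical, and the `"@_%s"` naming mirrors the default `attr_internal_naming_format`:

```ruby
# A minimal plain-Ruby sketch of what attr_internal_accessor generates:
# the accessor pair reads and writes an internally-named instance variable
# (@_session) instead of the plain @session, leaving @session free.
module AttrInternalSketch
  def attr_internal_accessor(*attrs)
    attrs.each do |name|
      ivar = "@_#{name}" # default attr_internal_naming_format is "@_%s"
      define_method(name) { instance_variable_get(ivar) }
      define_method("#{name}=") { |value| instance_variable_set(ivar, value) }
    end
  end
end

class Request
  extend AttrInternalSketch
  attr_internal_accessor :session
end

request = Request.new
request.session = "abc123"
request.session                           # => "abc123"
request.instance_variable_get(:@_session) # => "abc123"
```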
Also aliased as: [attr\_internal](module#method-i-attr_internal) attr\_internal\_reader(\*attrs) Show source ``` # File activesupport/lib/active_support/core_ext/module/attr_internal.rb, line 5 def attr_internal_reader(*attrs) attrs.each { |attr_name| attr_internal_define(attr_name, :reader) } end ``` Declares an attribute reader backed by an internally-named instance variable. attr\_internal\_writer(\*attrs) Show source ``` # File activesupport/lib/active_support/core_ext/module/attr_internal.rb, line 10 def attr_internal_writer(*attrs) attrs.each { |attr_name| attr_internal_define(attr_name, :writer) } end ``` Declares an attribute writer backed by an internally-named instance variable. cattr\_accessor(\*syms, instance\_reader: true, instance\_writer: true, instance\_accessor: true, default: nil, &blk) Alias for: [mattr\_accessor](module#method-i-mattr_accessor) cattr\_reader(\*syms, instance\_reader: true, instance\_accessor: true, default: nil, location: nil) Alias for: [mattr\_reader](module#method-i-mattr_reader) cattr\_writer(\*syms, instance\_writer: true, instance\_accessor: true, default: nil, location: nil) Alias for: [mattr\_writer](module#method-i-mattr_writer) delegate(\*methods, to: nil, prefix: nil, allow\_nil: nil, private: nil) Show source ``` # File activesupport/lib/active_support/core_ext/module/delegation.rb, line 171 def delegate(*methods, to: nil, prefix: nil, allow_nil: nil, private: nil) unless to raise ArgumentError, "Delegation needs a target. Supply a keyword argument 'to' (e.g. delegate :hello, to: :greeter)." end if prefix == true && /^[^a-z_]/.match?(to) raise ArgumentError, "Can only automatically set the delegation prefix when delegating to a method." end method_prefix = \ if prefix "#{prefix == true ? 
to : prefix}_" else "" end location = caller_locations(1, 1).first file, line = location.path, location.lineno to = to.to_s to = "self.#{to}" if DELEGATION_RESERVED_METHOD_NAMES.include?(to) method_def = [] method_names = [] methods.map do |method| method_name = prefix ? "#{method_prefix}#{method}" : method method_names << method_name.to_sym # Attribute writer methods only accept one argument. Makes sure []= # methods still accept two arguments. definition = /[^\]]=\z/.match?(method) ? "arg" : "..." # The following generated method calls the target exactly once, storing # the returned value in a dummy variable. # # Reason is twofold: On one hand doing less calls is in general better. # On the other hand it could be that the target has side-effects, # whereas conceptually, from the user point of view, the delegator should # be doing one call. if allow_nil method = method.to_s method_def << "def #{method_name}(#{definition})" << " _ = #{to}" << " if !_.nil? || nil.respond_to?(:#{method})" << " _.#{method}(#{definition})" << " end" << "end" else method = method.to_s method_name = method_name.to_s method_def << "def #{method_name}(#{definition})" << " _ = #{to}" << " _.#{method}(#{definition})" << "rescue NoMethodError => e" << " if _.nil? && e.name == :#{method}" << %( raise DelegationError, "#{self}##{method_name} delegated to #{to}.#{method}, but #{to} is nil: \#{self.inspect}") << " else" << " raise" << " end" << "end" end end module_eval(method_def.join(";"), file, line) private(*method_names) if private method_names end ``` Provides a `delegate` class method to easily expose contained objects' public methods as your own. 
#### Options * `:to` - Specifies the target object name as a symbol or string * `:prefix` - Prefixes the new method with the target name or a custom prefix * `:allow_nil` - If set to true, prevents a `Module::DelegationError` from being raised * `:private` - If set to true, changes method visibility to private The macro receives one or more method names (specified as symbols or strings) and the name of the target object via the `:to` option (also a symbol or string). Delegation is particularly useful with Active Record associations: ``` class Greeter < ActiveRecord::Base def hello 'hello' end def goodbye 'goodbye' end end class Foo < ActiveRecord::Base belongs_to :greeter delegate :hello, to: :greeter end Foo.new.hello # => "hello" Foo.new.goodbye # => NoMethodError: undefined method `goodbye' for #<Foo:0x1af30c> ``` Multiple delegates to the same target are allowed: ``` class Foo < ActiveRecord::Base belongs_to :greeter delegate :hello, :goodbye, to: :greeter end Foo.new.goodbye # => "goodbye" ``` Methods can be delegated to instance variables, class variables, or constants by providing them as a symbols: ``` class Foo CONSTANT_ARRAY = [0,1,2,3] @@class_array = [4,5,6,7] def initialize @instance_array = [8,9,10,11] end delegate :sum, to: :CONSTANT_ARRAY delegate :min, to: :@@class_array delegate :max, to: :@instance_array end Foo.new.sum # => 6 Foo.new.min # => 4 Foo.new.max # => 11 ``` It's also possible to delegate a method to the class by using `:class`: ``` class Foo def self.hello "world" end delegate :hello, to: :class end Foo.new.hello # => "world" ``` Delegates can optionally be prefixed using the `:prefix` option. If the value is `true`, the delegate methods are prefixed with the name of the object being delegated to. 
``` Person = Struct.new(:name, :address) class Invoice < Struct.new(:client) delegate :name, :address, to: :client, prefix: true end john_doe = Person.new('John Doe', 'Vimmersvej 13') invoice = Invoice.new(john_doe) invoice.client_name # => "John Doe" invoice.client_address # => "Vimmersvej 13" ``` It is also possible to supply a custom prefix. ``` class Invoice < Struct.new(:client) delegate :name, :address, to: :client, prefix: :customer end invoice = Invoice.new(john_doe) invoice.customer_name # => 'John Doe' invoice.customer_address # => 'Vimmersvej 13' ``` The delegated methods are public by default. Pass `private: true` to change that. ``` class User < ActiveRecord::Base has_one :profile delegate :first_name, to: :profile delegate :date_of_birth, to: :profile, private: true def age Date.today.year - date_of_birth.year end end User.new.first_name # => "Tomas" User.new.date_of_birth # => NoMethodError: private method `date_of_birth' called for #<User:0x00000008221340> User.new.age # => 2 ``` If the target is `nil` and does not respond to the delegated method a `Module::DelegationError` is raised. If you wish to instead return `nil`, use the `:allow_nil` option. 
``` class User < ActiveRecord::Base has_one :profile delegate :age, to: :profile end User.new.age # => Module::DelegationError: User#age delegated to profile.age, but profile is nil ``` But if not having a profile yet is fine and should not be an error condition: ``` class User < ActiveRecord::Base has_one :profile delegate :age, to: :profile, allow_nil: true end User.new.age # nil ``` Note that if the target is not `nil` then the call is attempted regardless of the `:allow_nil` option, and thus an exception is still raised if said object does not respond to the method: ``` class Foo def initialize(bar) @bar = bar end delegate :name, to: :@bar, allow_nil: true end Foo.new("Bar").name # raises NoMethodError: undefined method `name' ``` The target method must be public, otherwise it will raise `NoMethodError`. delegate\_missing\_to(target, allow\_nil: nil) Show source ``` # File activesupport/lib/active_support/core_ext/module/delegation.rb, line 289 def delegate_missing_to(target, allow_nil: nil) target = target.to_s target = "self.#{target}" if DELEGATION_RESERVED_METHOD_NAMES.include?(target) module_eval <<-RUBY, __FILE__, __LINE__ + 1 def respond_to_missing?(name, include_private = false) # It may look like an oversight, but we deliberately do not pass # +include_private+, because they do not get delegated. return false if name == :marshal_dump || name == :_dump #{target}.respond_to?(name) || super end def method_missing(method, *args, &block) if #{target}.respond_to?(method) #{target}.public_send(method, *args, &block) else begin super rescue NoMethodError if #{target}.nil? 
if #{allow_nil == true} nil else raise DelegationError, "\#{method} delegated to #{target}, but #{target} is nil" end else raise end end end end ruby2_keywords(:method_missing) RUBY end ``` When building decorators, a common pattern may emerge: ``` class Partition def initialize(event) @event = event end def person detail.person || creator end private def respond_to_missing?(name, include_private = false) @event.respond_to?(name, include_private) end def method_missing(method, *args, &block) @event.send(method, *args, &block) end end ``` With `Module#delegate_missing_to`, the above is condensed to: ``` class Partition delegate_missing_to :@event def initialize(event) @event = event end def person detail.person || creator end end ``` The target can be anything callable within the object, e.g. instance variables, methods, constants, etc. The delegated method must be public on the target, otherwise it will raise `DelegationError`. If you wish to instead return `nil`, use the `:allow_nil` option. The `marshal_dump` and `_dump` methods are exempt from delegation due to possible interference when calling `Marshal.dump(object)`, should the delegation target method of `object` add or remove instance variables. deprecate(\*method\_names) Show source ``` # File activesupport/lib/active_support/core_ext/module/deprecation.rb, line 22 def deprecate(*method_names) ActiveSupport::Deprecation.deprecate_methods(self, *method_names) end ``` ``` deprecate :foo deprecate bar: 'message' deprecate :foo, :bar, baz: 'warning!', qux: 'gone!' ``` You can also use custom deprecator instance: ``` deprecate :foo, deprecator: MyLib::Deprecator.new deprecate :foo, bar: "warning!", deprecator: MyLib::Deprecator.new ``` Custom deprecators must respond to `deprecation_warning(deprecated_method_name, message, caller_backtrace)` method where you can implement your custom warning behavior. 
``` class MyLib::Deprecator def deprecation_warning(deprecated_method_name, message, caller_backtrace = nil) message = "#{deprecated_method_name} is deprecated and will be removed from MyLibrary | #{message}" Kernel.warn message end end ``` mattr\_accessor(\*syms, instance\_reader: true, instance\_writer: true, instance\_accessor: true, default: nil, &blk) Show source ``` # File activesupport/lib/active_support/core_ext/module/attribute_accessors.rb, line 202 def mattr_accessor(*syms, instance_reader: true, instance_writer: true, instance_accessor: true, default: nil, &blk) location = caller_locations(1, 1).first mattr_reader(*syms, instance_reader: instance_reader, instance_accessor: instance_accessor, default: default, location: location, &blk) mattr_writer(*syms, instance_writer: instance_writer, instance_accessor: instance_accessor, default: default, location: location) end ``` Defines both class and instance accessors for class attributes. All class and instance methods created will be public, even if this method is called with a private or protected access modifier. ``` module HairColors mattr_accessor :hair_colors end class Person include HairColors end HairColors.hair_colors = [:brown, :black, :blonde, :red] HairColors.hair_colors # => [:brown, :black, :blonde, :red] Person.new.hair_colors # => [:brown, :black, :blonde, :red] ``` If a subclass changes the value then that would also change the value for parent class. Similarly if parent class changes the value then that would change the value of subclasses too. ``` class Citizen < Person end Citizen.new.hair_colors << :blue Person.new.hair_colors # => [:brown, :black, :blonde, :red, :blue] ``` To omit the instance writer method, pass `instance_writer: false`. To omit the instance reader method, pass `instance_reader: false`. 
``` module HairColors mattr_accessor :hair_colors, instance_writer: false, instance_reader: false end class Person include HairColors end Person.new.hair_colors = [:brown] # => NoMethodError Person.new.hair_colors # => NoMethodError ``` Or pass `instance_accessor: false`, to omit both instance methods. ``` module HairColors mattr_accessor :hair_colors, instance_accessor: false end class Person include HairColors end Person.new.hair_colors = [:brown] # => NoMethodError Person.new.hair_colors # => NoMethodError ``` You can set a default value for the attribute. ``` module HairColors mattr_accessor :hair_colors, default: [:brown, :black, :blonde, :red] end class Person include HairColors end Person.class_variable_get("@@hair_colors") # => [:brown, :black, :blonde, :red] ``` Also aliased as: [cattr\_accessor](module#method-i-cattr_accessor) mattr\_reader(\*syms, instance\_reader: true, instance\_accessor: true, default: nil, location: nil) { |: default| ... } Show source ``` # File activesupport/lib/active_support/core_ext/module/attribute_accessors.rb, line 53 def mattr_reader(*syms, instance_reader: true, instance_accessor: true, default: nil, location: nil) raise TypeError, "module attributes should be defined directly on class, not singleton" if singleton_class? location ||= caller_locations(1, 1).first definition = [] syms.each do |sym| raise NameError.new("invalid attribute name: #{sym}") unless /\A[_A-Za-z]\w*\z/.match?(sym) definition << "def self.#{sym}; @@#{sym}; end" if instance_reader && instance_accessor definition << "def #{sym}; @@#{sym}; end" end sym_default_value = (block_given? && default.nil?) ? yield : default class_variable_set("@@#{sym}", sym_default_value) unless sym_default_value.nil? && class_variable_defined?("@@#{sym}") end module_eval(definition.join(";"), location.path, location.lineno) end ``` Defines a class attribute and creates a class and instance reader methods. 
The underlying class variable is set to `nil`, if it is not previously defined. All class and instance methods created will be public, even if this method is called with a private or protected access modifier. ``` module HairColors mattr_reader :hair_colors end HairColors.hair_colors # => nil HairColors.class_variable_set("@@hair_colors", [:brown, :black]) HairColors.hair_colors # => [:brown, :black] ``` The attribute name must be a valid method name in Ruby. ``` module Foo mattr_reader :"1_Badname" end # => NameError: invalid attribute name: 1_Badname ``` To omit the instance reader method, pass `instance_reader: false` or `instance_accessor: false`. ``` module HairColors mattr_reader :hair_colors, instance_reader: false end class Person include HairColors end Person.new.hair_colors # => NoMethodError ``` You can set a default value for the attribute. ``` module HairColors mattr_reader :hair_colors, default: [:brown, :black, :blonde, :red] end class Person include HairColors end Person.new.hair_colors # => [:brown, :black, :blonde, :red] ``` Also aliased as: [cattr\_reader](module#method-i-cattr_reader) mattr\_writer(\*syms, instance\_writer: true, instance\_accessor: true, default: nil, location: nil) { |: default| ... } Show source ``` # File activesupport/lib/active_support/core_ext/module/attribute_accessors.rb, line 117 def mattr_writer(*syms, instance_writer: true, instance_accessor: true, default: nil, location: nil) raise TypeError, "module attributes should be defined directly on class, not singleton" if singleton_class? location ||= caller_locations(1, 1).first definition = [] syms.each do |sym| raise NameError.new("invalid attribute name: #{sym}") unless /\A[_A-Za-z]\w*\z/.match?(sym) definition << "def self.#{sym}=(val); @@#{sym} = val; end" if instance_writer && instance_accessor definition << "def #{sym}=(val); @@#{sym} = val; end" end sym_default_value = (block_given? && default.nil?) ? 
yield : default class_variable_set("@@#{sym}", sym_default_value) unless sym_default_value.nil? && class_variable_defined?("@@#{sym}") end module_eval(definition.join(";"), location.path, location.lineno) end ``` Defines a class attribute and creates a class and instance writer methods to allow assignment to the attribute. All class and instance methods created will be public, even if this method is called with a private or protected access modifier. ``` module HairColors mattr_writer :hair_colors end class Person include HairColors end HairColors.hair_colors = [:brown, :black] Person.class_variable_get("@@hair_colors") # => [:brown, :black] Person.new.hair_colors = [:blonde, :red] HairColors.class_variable_get("@@hair_colors") # => [:blonde, :red] ``` To omit the instance writer method, pass `instance_writer: false` or `instance_accessor: false`. ``` module HairColors mattr_writer :hair_colors, instance_writer: false end class Person include HairColors end Person.new.hair_colors = [:blonde, :red] # => NoMethodError ``` You can set a default value for the attribute. ``` module HairColors mattr_writer :hair_colors, default: [:brown, :black, :blonde, :red] end class Person include HairColors end Person.class_variable_get("@@hair_colors") # => [:brown, :black, :blonde, :red] ``` Also aliased as: [cattr\_writer](module#method-i-cattr_writer) module\_parent() Show source ``` # File activesupport/lib/active_support/core_ext/module/introspection.rb, line 35 def module_parent module_parent_name ? ActiveSupport::Inflector.constantize(module_parent_name) : Object end ``` Returns the module which contains this one according to its name. ``` module M module N end end X = M::N M::N.module_parent # => M X.module_parent # => M ``` The parent of top-level and anonymous modules is [`Object`](object). 
``` M.module_parent # => Object Module.new.module_parent # => Object ``` module\_parent\_name() Show source ``` # File activesupport/lib/active_support/core_ext/module/introspection.rb, line 10 def module_parent_name if defined?(@parent_name) @parent_name else parent_name = name =~ /::[^:]+\z/ ? -$` : nil @parent_name = parent_name unless frozen? parent_name end end ``` Returns the name of the module containing this one. ``` M::N.module_parent_name # => "M" ``` module\_parents() Show source ``` # File activesupport/lib/active_support/core_ext/module/introspection.rb, line 51 def module_parents parents = [] if module_parent_name parts = module_parent_name.split("::") until parts.empty? parents << ActiveSupport::Inflector.constantize(parts * "::") parts.pop end end parents << Object unless parents.include? Object parents end ``` Returns all the parents of this module according to its name, ordered from nested outwards. The receiver is not contained within the result. ``` module M module N end end X = M::N M.module_parents # => [Object] M::N.module_parents # => [M, Object] X.module_parents # => [M, Object] ``` redefine\_method(method, &block) Show source ``` # File activesupport/lib/active_support/core_ext/module/redefine_method.rb, line 17 def redefine_method(method, &block) visibility = method_visibility(method) silence_redefinition_of_method(method) define_method(method, &block) send(visibility, method) end ``` Replaces the existing method definition, if there is one, with the passed block as its body. redefine\_singleton\_method(method, &block) Show source ``` # File activesupport/lib/active_support/core_ext/module/redefine_method.rb, line 26 def redefine_singleton_method(method, &block) singleton_class.redefine_method(method, &block) end ``` Replaces the existing singleton method definition, if there is one, with the passed block as its body. 
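The effect of `redefine_method` can be sketched in plain Ruby (the `Greeter` class is made up for illustration): the self-alias is what `silence_redefinition_of_method` does to suppress the "method redefined" warning, after which `define_method` installs the new body. The real method additionally restores the original visibility.

```ruby
# A simplified sketch of redefine_method, assuming a hypothetical Greeter class.
class Greeter
  def hello
    "hi"
  end
end

Greeter.class_eval do
  alias_method :hello, :hello             # silences the redefinition warning
  define_method(:hello) { "hello again" } # new body replaces the old one
end

Greeter.new.hello # => "hello again"
```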
remove\_possible\_method(method) Show source ``` # File activesupport/lib/active_support/core_ext/module/remove_method.rb, line 7 def remove_possible_method(method) if method_defined?(method) || private_method_defined?(method) undef_method(method) end end ``` Removes the named method, if it exists. remove\_possible\_singleton\_method(method) Show source ``` # File activesupport/lib/active_support/core_ext/module/remove_method.rb, line 14 def remove_possible_singleton_method(method) singleton_class.remove_possible_method(method) end ``` Removes the named singleton method, if it exists. silence\_redefinition\_of\_method(method) Show source ``` # File activesupport/lib/active_support/core_ext/module/redefine_method.rb, line 7 def silence_redefinition_of_method(method) if method_defined?(method) || private_method_defined?(method) # This suppresses the "method redefined" warning; the self-alias # looks odd, but means we don't need to generate a unique name alias_method method, method end end ``` Marks the named method as intended to be redefined, if it exists. Suppresses the Ruby method redefinition warning. Prefer [`redefine_method`](module#method-i-redefine_method) where possible. 
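The guard in `remove_possible_method` matters because `undef_method` raises a `NameError` for a method that does not exist; checking definedness first makes removal a safe no-op. A plain-Ruby sketch of that logic, using a made-up `Widget` class:

```ruby
# Hypothetical example: remove a method only if it is actually defined.
class Widget
  def legacy_api
    :old
  end
end

remove_if_defined = lambda do |mod, name|
  if mod.method_defined?(name) || mod.private_method_defined?(name)
    mod.send(:undef_method, name)
  end
end

remove_if_defined.call(Widget, :legacy_api)  # removed
remove_if_defined.call(Widget, :never_there) # silently does nothing
Widget.new.respond_to?(:legacy_api)          # => false
```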
thread\_cattr\_accessor(\*syms, instance\_reader: true, instance\_writer: true, instance\_accessor: true, default: nil) Alias for: [thread\_mattr\_accessor](module#method-i-thread_mattr_accessor) thread\_mattr\_accessor(\*syms, instance\_reader: true, instance\_writer: true, instance\_accessor: true, default: nil) Show source ``` # File activesupport/lib/active_support/core_ext/module/attribute_accessors_per_thread.rb, line 152 def thread_mattr_accessor(*syms, instance_reader: true, instance_writer: true, instance_accessor: true, default: nil) thread_mattr_reader(*syms, instance_reader: instance_reader, instance_accessor: instance_accessor, default: default) thread_mattr_writer(*syms, instance_writer: instance_writer, instance_accessor: instance_accessor) end ``` Defines both class and instance accessors for class attributes. ``` class Account thread_mattr_accessor :user end Account.user = "DHH" Account.user # => "DHH" Account.new.user # => "DHH" ``` Unlike `mattr\_accessor`, values are **not** shared with subclasses or parent classes. If a subclass changes the value, the parent class' value is not changed. If the parent class changes the value, the value of subclasses is not changed. ``` class Customer < Account end Account.user # => "DHH" Customer.user # => nil Customer.user = "Rafael" Customer.user # => "Rafael" Account.user # => "DHH" ``` To omit the instance writer method, pass `instance_writer: false`. To omit the instance reader method, pass `instance_reader: false`. ``` class Current thread_mattr_accessor :user, instance_writer: false, instance_reader: false end Current.new.user = "DHH" # => NoMethodError Current.new.user # => NoMethodError ``` Or pass `instance_accessor: false`, to omit both instance methods. ``` class Current thread_mattr_accessor :user, instance_accessor: false end Current.new.user = "DHH" # => NoMethodError Current.new.user # => NoMethodError ``` Also aliased as: [thread\_cattr\_accessor](module#method-i-thread_cattr_accessor)
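The per-thread behavior above can be sketched with plain thread-local storage. This is a simplification: the real accessors go through Active Support's isolated execution state (and so can be fiber-scoped), but storing the value under a class-name-derived key in `Thread.current` shows why each thread sees its own value.

```ruby
# A hand-rolled sketch of what thread_mattr_accessor :user generates for a
# hypothetical Account class, assuming simple Thread.current storage.
class Account
  def self.user=(value)
    Thread.current[:attr_Account_user] = value
  end

  def self.user
    Thread.current[:attr_Account_user]
  end
end

Account.user = "DHH"
Account.user                      # => "DHH"
Thread.new { Account.user }.value # => nil, each thread starts fresh
```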
rails class Delegator class Delegator ================ Parent: [Object](object) try(\*args, &block) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 121 ``` See [`Object#try`](object#method-i-try) try!(\*args, &block) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 129 ``` See [`Object#try!`](object#method-i-try-21) rails class Hash class Hash =========== Parent: [Object](object) from\_trusted\_xml(xml) Show source ``` # File activesupport/lib/active_support/core_ext/hash/conversions.rb, line 134 def from_trusted_xml(xml) from_xml xml, [] end ``` Builds a [`Hash`](hash) from XML just like `Hash.from_xml`, but also allows `Symbol` and YAML. from\_xml(xml, disallowed\_types = nil) Show source ``` # File activesupport/lib/active_support/core_ext/hash/conversions.rb, line 129 def from_xml(xml, disallowed_types = nil) ActiveSupport::XMLConverter.new(xml, disallowed_types).to_h end ``` Returns a [`Hash`](hash) containing a collection of pairs when the key is the node name and the value is its content ``` xml = <<-XML <?xml version="1.0" encoding="UTF-8"?> <hash> <foo type="integer">1</foo> <bar type="integer">2</bar> </hash> XML hash = Hash.from_xml(xml) # => {"hash"=>{"foo"=>1, "bar"=>2}} ``` `DisallowedType` is raised if the XML contains attributes with `type="yaml"` or `type="symbol"`. Use `Hash.from_trusted_xml` to parse this XML. Custom `disallowed_types` can also be passed in the form of an array. ``` xml = <<-XML <?xml version="1.0" encoding="UTF-8"?> <hash> <foo type="integer">1</foo> <bar type="string">"David"</bar> </hash> XML hash = Hash.from_xml(xml, ['integer']) # => ActiveSupport::XMLConverter::DisallowedType: Disallowed type attribute: "integer" ``` Note that passing custom disallowed types will override the default types, which are `Symbol` and YAML. 
assert\_valid\_keys(\*valid\_keys) Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 48 def assert_valid_keys(*valid_keys) valid_keys.flatten! each_key do |k| unless valid_keys.include?(k) raise ArgumentError.new("Unknown key: #{k.inspect}. Valid keys are: #{valid_keys.map(&:inspect).join(', ')}") end end end ``` Validates all keys in a hash match `*valid_keys`, raising `ArgumentError` on a mismatch. Note that keys are treated differently than [`HashWithIndifferentAccess`](activesupport/hashwithindifferentaccess), meaning that string and symbol keys will not match. ``` { name: 'Rob', years: '28' }.assert_valid_keys(:name, :age) # => raises "ArgumentError: Unknown key: :years. Valid keys are: :name, :age" { name: 'Rob', age: '28' }.assert_valid_keys('name', 'age') # => raises "ArgumentError: Unknown key: :name. Valid keys are: 'name', 'age'" { name: 'Rob', age: '28' }.assert_valid_keys(:name, :age) # => passes, raises nothing ``` compact\_blank!() Show source ``` # File activesupport/lib/active_support/core_ext/enumerable.rb, line 261 def compact_blank! # use delete_if rather than reject! because it always returns self even if nothing changed delete_if { |_k, v| v.blank? } end ``` Removes all blank values from the `Hash` in place and returns self. Uses [`Object#blank?`](object#method-i-blank-3F) for determining if a value is blank. ``` h = { a: "", b: 1, c: nil, d: [], e: false, f: true } h.compact_blank! # => { b: 1, f: true } ``` deep\_dup() Show source ``` # File activesupport/lib/active_support/core_ext/object/deep_dup.rb, line 43 def deep_dup hash = dup each_pair do |key, value| if ::String === key || ::Symbol === key hash[key] = value.deep_dup else hash.delete(key) hash[key.deep_dup] = value.deep_dup end end hash end ``` Returns a deep copy of hash. 
``` hash = { a: { b: 'b' } } dup = hash.deep_dup dup[:a][:c] = 'c' hash[:a][:c] # => nil dup[:a][:c] # => "c" ``` deep\_merge(other\_hash, &block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/deep_merge.rb, line 18 def deep_merge(other_hash, &block) dup.deep_merge!(other_hash, &block) end ``` Returns a new hash with `self` and `other_hash` merged recursively. ``` h1 = { a: true, b: { c: [1, 2, 3] } } h2 = { a: false, b: { x: [3, 4, 5] } } h1.deep_merge(h2) # => { a: false, b: { c: [1, 2, 3], x: [3, 4, 5] } } ``` Like with Hash#merge in the standard library, a block can be provided to merge values: ``` h1 = { a: 100, b: 200, c: { c1: 100 } } h2 = { b: 250, c: { c1: 200 } } h1.deep_merge(h2) { |key, this_val, other_val| this_val + other_val } # => { a: 100, b: 450, c: { c1: 300 } } ``` deep\_merge!(other\_hash, &block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/deep_merge.rb, line 23 def deep_merge!(other_hash, &block) merge!(other_hash) do |key, this_val, other_val| if this_val.is_a?(Hash) && other_val.is_a?(Hash) this_val.deep_merge(other_val, &block) elsif block_given? block.call(key, this_val, other_val) else other_val end end end ``` Same as `deep_merge`, but modifies `self`. deep\_stringify\_keys() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 84 def deep_stringify_keys deep_transform_keys(&:to_s) end ``` Returns a new hash with all keys converted to strings. This includes the keys from the root hash and from all nested hashes and arrays. ``` hash = { person: { name: 'Rob', age: '28' } } hash.deep_stringify_keys # => {"person"=>{"name"=>"Rob", "age"=>"28"}} ``` deep\_stringify\_keys!() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 91 def deep_stringify_keys! deep_transform_keys!(&:to_s) end ``` Destructively converts all keys to strings. This includes the keys from the root hash and from all nested hashes and arrays. 
deep\_symbolize\_keys() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 103 def deep_symbolize_keys deep_transform_keys { |key| key.to_sym rescue key } end ``` Returns a new hash with all keys converted to symbols, as long as they respond to `to_sym`. This includes the keys from the root hash and from all nested hashes and arrays. ``` hash = { 'person' => { 'name' => 'Rob', 'age' => '28' } } hash.deep_symbolize_keys # => {:person=>{:name=>"Rob", :age=>"28"}} ``` deep\_symbolize\_keys!() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 110 def deep_symbolize_keys! deep_transform_keys! { |key| key.to_sym rescue key } end ``` Destructively converts all keys to symbols, as long as they respond to `to_sym`. This includes the keys from the root hash and from all nested hashes and arrays. deep\_transform\_keys(&block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 65 def deep_transform_keys(&block) _deep_transform_keys_in_object(self, &block) end ``` Returns a new hash with all keys converted by the block operation. This includes the keys from the root hash and from all nested hashes and arrays. ``` hash = { person: { name: 'Rob', age: '28' } } hash.deep_transform_keys{ |key| key.to_s.upcase } # => {"PERSON"=>{"NAME"=>"Rob", "AGE"=>"28"}} ``` deep\_transform\_keys!(&block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 72 def deep_transform_keys!(&block) _deep_transform_keys_in_object!(self, &block) end ``` Destructively converts all keys by using the block operation. This includes the keys from the root hash and from all nested hashes and arrays. 
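The deep variants above all delegate to a private recursive helper (`_deep_transform_keys_in_object`). A standalone sketch of that recursion, assuming a hypothetical top-level method name: hashes get their keys transformed, arrays are walked element by element, and anything else is returned untouched.

```ruby
# Plain-Ruby sketch of the recursion behind deep_transform_keys.
def deep_transform_keys_sketch(object, &block)
  case object
  when Hash
    object.each_with_object({}) do |(key, value), result|
      result[yield(key)] = deep_transform_keys_sketch(value, &block)
    end
  when Array
    object.map { |element| deep_transform_keys_sketch(element, &block) }
  else
    object # leaves, e.g. strings and numbers, pass through unchanged
  end
end

hash = { person: { name: 'Rob', pets: [{ kind: 'cat' }] } }
deep_transform_keys_sketch(hash) { |key| key.to_s.upcase }
# => {"PERSON"=>{"NAME"=>"Rob", "PETS"=>[{"KIND"=>"cat"}]}}
```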
deep\_transform\_values(&block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/deep_transform_values.rb, line 12 def deep_transform_values(&block) _deep_transform_values_in_object(self, &block) end ``` Returns a new hash with all values converted by the block operation. This includes the values from the root hash and from all nested hashes and arrays. ``` hash = { person: { name: 'Rob', age: '28' } } hash.deep_transform_values{ |value| value.to_s.upcase } # => {person: {name: "ROB", age: "28"}} ``` deep\_transform\_values!(&block) Show source ``` # File activesupport/lib/active_support/core_ext/hash/deep_transform_values.rb, line 19 def deep_transform_values!(&block) _deep_transform_values_in_object!(self, &block) end ``` Destructively converts all values by using the block operation. This includes the values from the root hash and from all nested hashes and arrays. except(\*keys) Show source ``` # File activesupport/lib/active_support/core_ext/hash/except.rb, line 12 def except(*keys) slice(*self.keys - keys) end ``` Returns a hash that includes everything except given keys. ``` hash = { a: true, b: false, c: nil } hash.except(:c) # => { a: true, b: false } hash.except(:a, :b) # => { c: nil } hash # => { a: true, b: false, c: nil } ``` This is useful for limiting a set of parameters to everything but a few known toggles: ``` @person.update(params[:person].except(:admin)) ``` except!(\*keys) Show source ``` # File activesupport/lib/active_support/core_ext/hash/except.rb, line 20 def except!(*keys) keys.each { |key| delete(key) } self end ``` Removes the given keys from hash and returns it. 
``` hash = { a: true, b: false, c: nil } hash.except!(:c) # => { a: true, b: false } hash # => { a: true, b: false } ``` extract!(\*keys) Show source ``` # File activesupport/lib/active_support/core_ext/hash/slice.rb, line 24 def extract!(*keys) keys.each_with_object(self.class.new) { |key, result| result[key] = delete(key) if has_key?(key) } end ``` Removes and returns the key/value pairs matching the given keys. ``` hash = { a: 1, b: 2, c: 3, d: 4 } hash.extract!(:a, :b) # => {:a=>1, :b=>2} hash # => {:c=>3, :d=>4} ``` extractable\_options?() Show source ``` # File activesupport/lib/active_support/core_ext/array/extract_options.rb, line 9 def extractable_options? instance_of?(Hash) end ``` By default, only instances of [`Hash`](hash) itself are extractable. Subclasses of [`Hash`](hash) may implement this method and return true to declare themselves as extractable. If a [`Hash`](hash) is extractable, [`Array#extract_options!`](array#method-i-extract_options-21) pops it from the [`Array`](array) when it is the last element of the [`Array`](array). nested\_under\_indifferent\_access() Called when object is nested under an object that receives [`with_indifferent_access`](hash#method-i-with_indifferent_access). This method will be called on the current object by the enclosing object and is aliased to [`with_indifferent_access`](hash#method-i-with_indifferent_access) by default. Subclasses of [`Hash`](hash) may overwrite this method to return `self` if converting to an `ActiveSupport::HashWithIndifferentAccess` would not be desirable. ``` b = { b: 1 } { a: b }.with_indifferent_access['a'] # calls b.nested_under_indifferent_access # => {"b"=>1} ``` Alias for: [with\_indifferent\_access](hash#method-i-with_indifferent_access) reverse\_merge(other\_hash) Show source ``` # File activesupport/lib/active_support/core_ext/hash/reverse_merge.rb, line 14 def reverse_merge(other_hash) other_hash.merge(self) end ``` Merges the caller into `other_hash`. 
For example, ``` options = options.reverse_merge(size: 25, velocity: 10) ``` is equivalent to ``` options = { size: 25, velocity: 10 }.merge(options) ``` This is particularly useful for initializing an options hash with default values. Also aliased as: [with\_defaults](hash#method-i-with_defaults) reverse\_merge!(other\_hash) Show source ``` # File activesupport/lib/active_support/core_ext/hash/reverse_merge.rb, line 20 def reverse_merge!(other_hash) replace(reverse_merge(other_hash)) end ``` Destructive `reverse_merge`. Also aliased as: [reverse\_update](hash#method-i-reverse_update), [with\_defaults!](hash#method-i-with_defaults-21) reverse\_update(other\_hash) Alias for: [reverse\_merge!](hash#method-i-reverse_merge-21) slice!(\*keys) Show source ``` # File activesupport/lib/active_support/core_ext/hash/slice.rb, line 10 def slice!(*keys) omit = slice(*self.keys - keys) hash = slice(*keys) hash.default = default hash.default_proc = default_proc if default_proc replace(hash) omit end ``` Replaces the hash with only the given keys. Returns a hash containing the removed key/value pairs. ``` hash = { a: 1, b: 2, c: 3, d: 4 } hash.slice!(:a, :b) # => {:c=>3, :d=>4} hash # => {:a=>1, :b=>2} ``` stringify\_keys() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 10 def stringify_keys transform_keys(&:to_s) end ``` Returns a new hash with all keys converted to strings. ``` hash = { name: 'Rob', age: '28' } hash.stringify_keys # => {"name"=>"Rob", "age"=>"28"} ``` stringify\_keys!() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 16 def stringify_keys! transform_keys!(&:to_s) end ``` Destructively converts all keys to strings. Same as `stringify_keys`, but modifies `self`. 
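The `slice!` source above uses only core `Hash` methods (`slice` is built into Ruby since 2.5), so its behavior can be tried outside Rails by patching `Hash` with the definition shown. A sketch for experimentation; real applications should simply load Active Support:

```ruby
# Patch Hash with the slice! definition shown above. No Active Support needed:
# slice, default, default_proc, and replace are all core Hash methods.
class Hash
  def slice!(*keys)
    omit = slice(*self.keys - keys)
    hash = slice(*keys)
    hash.default = default
    hash.default_proc = default_proc if default_proc
    replace(hash)
    omit
  end
end

hash = { a: 1, b: 2, c: 3, d: 4 }
removed = hash.slice!(:a, :b)
removed  # => {:c=>3, :d=>4}
hash     # => {:a=>1, :b=>2}
```

The subtlety worth noticing is the return value: unlike `slice`, the bang form returns the *removed* pairs, while carrying the original `default`/`default_proc` over to the retained hash.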
symbolize\_keys() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 27 def symbolize_keys transform_keys { |key| key.to_sym rescue key } end ``` Returns a new hash with all keys converted to symbols, as long as they respond to `to_sym`. ``` hash = { 'name' => 'Rob', 'age' => '28' } hash.symbolize_keys # => {:name=>"Rob", :age=>"28"} ``` Also aliased as: [to\_options](hash#method-i-to_options) symbolize\_keys!() Show source ``` # File activesupport/lib/active_support/core_ext/hash/keys.rb, line 34 def symbolize_keys! transform_keys! { |key| key.to_sym rescue key } end ``` Destructively converts all keys to symbols, as long as they respond to `to_sym`. Same as `symbolize_keys`, but modifies `self`. Also aliased as: [to\_options!](hash#method-i-to_options-21) to\_options() Alias for: [symbolize\_keys](hash#method-i-symbolize_keys) to\_options!() Alias for: [symbolize\_keys!](hash#method-i-symbolize_keys-21) to\_param(namespace = nil) Alias for: [to\_query](hash#method-i-to_query) to\_query(namespace = nil) Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 77 def to_query(namespace = nil) query = filter_map do |key, value| unless (value.is_a?(Hash) || value.is_a?(Array)) && value.empty? value.to_query(namespace ? "#{namespace}[#{key}]" : key) end end query.sort! unless namespace.to_s.include?("[]") query.join("&") end ``` Returns a string representation of the receiver suitable for use as a URL query string: ``` {name: 'David', nationality: 'Danish'}.to_query # => "name=David&nationality=Danish" ``` An optional namespace can be passed to enclose key names: ``` {name: 'David', nationality: 'Danish'}.to_query('user') # => "user%5Bname%5D=David&user%5Bnationality%5D=Danish" ``` The string pairs “key=value” that conform the query string are sorted lexicographically in ascending order. This method is also aliased as `to_param`. 
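Outside Rails, much of `to_query`'s behavior for flat hashes can be approximated with the standard library's `URI.encode_www_form` plus an explicit sort. This is only a rough sketch: it skips the recursion into nested hashes and arrays that the real method performs.

```ruby
require "uri"

hash = { nationality: "Danish", name: "David" }

# Sort pairs lexicographically by key, then form-encode them.
query = URI.encode_www_form(hash.sort_by { |key, _| key.to_s })
# => "name=David&nationality=Danish"

# Namespacing can be emulated by rewriting the keys before encoding;
# encode_www_form percent-encodes the brackets, as in the Rails output above.
namespaced = URI.encode_www_form(
  hash.sort_by { |key, _| key.to_s }.map { |key, value| ["user[#{key}]", value] }
)
# => "user%5Bname%5D=David&user%5Bnationality%5D=Danish"
```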
Also aliased as: [to\_param](hash#method-i-to_param) to\_xml(options = {}) { |builder| ... } Show source ``` # File activesupport/lib/active_support/core_ext/hash/conversions.rb, line 75 def to_xml(options = {}) require "active_support/builder" unless defined?(Builder::XmlMarkup) options = options.dup options[:indent] ||= 2 options[:root] ||= "hash" options[:builder] ||= Builder::XmlMarkup.new(indent: options[:indent]) builder = options[:builder] builder.instruct! unless options.delete(:skip_instruct) root = ActiveSupport::XmlMini.rename_key(options[:root].to_s, options) builder.tag!(root) do each { |key, value| ActiveSupport::XmlMini.to_tag(key, value, options) } yield builder if block_given? end end ``` Returns a string containing an XML representation of its receiver: ``` { foo: 1, bar: 2 }.to_xml # => # <?xml version="1.0" encoding="UTF-8"?> # <hash> # <foo type="integer">1</foo> # <bar type="integer">2</bar> # </hash> ``` To do so, the method loops over the pairs and builds nodes that depend on the *values*. Given a pair `key`, `value`: * If `value` is a hash there's a recursive call with `key` as `:root`. * If `value` is an array there's a recursive call with `key` as `:root`, and `key` singularized as `:children`. * If `value` is a callable object it must expect one or two arguments. Depending on the arity, the callable is invoked with the `options` hash as first argument with `key` as `:root`, and `key` singularized as second argument. The callable can add nodes by using `options[:builder]`. ``` {foo: lambda { |options, key| options[:builder].b(key) }}.to_xml # => "<b>foo</b>" ``` * If `value` responds to `to_xml` the method is invoked with `key` as `:root`. ``` class Foo def to_xml(options) options[:builder].bar 'fooing!' end end { foo: Foo.new }.to_xml(skip_instruct: true) # => # <hash> # <bar>fooing!</bar> # </hash> ``` * Otherwise, a node with `key` as tag is created with a string representation of `value` as text node. 
If `value` is `nil` an attribute “nil” set to “true” is added. Unless the option `:skip_types` exists and is true, an attribute “type” is added as well according to the following mapping: ``` XML_TYPE_NAMES = { "Symbol" => "symbol", "Integer" => "integer", "BigDecimal" => "decimal", "Float" => "float", "TrueClass" => "boolean", "FalseClass" => "boolean", "Date" => "date", "DateTime" => "dateTime", "Time" => "dateTime" } ``` By default the root node is “hash”, but that's configurable via the `:root` option. The default XML builder is a fresh instance of `Builder::XmlMarkup`. You can configure your own builder with the `:builder` option. The method also accepts options like `:dasherize` and friends, they are forwarded to the builder. with\_defaults(other\_hash) Alias for: [reverse\_merge](hash#method-i-reverse_merge) with\_defaults!(other\_hash) Alias for: [reverse\_merge!](hash#method-i-reverse_merge-21) with\_indifferent\_access() Show source ``` # File activesupport/lib/active_support/core_ext/hash/indifferent_access.rb, line 9 def with_indifferent_access ActiveSupport::HashWithIndifferentAccess.new(self) end ``` Returns an `ActiveSupport::HashWithIndifferentAccess` out of its receiver: ``` { a: 1 }.with_indifferent_access['a'] # => 1 ``` Also aliased as: [nested\_under\_indifferent\_access](hash#method-i-nested_under_indifferent_access) rails class UnboundMethod class UnboundMethod ==================== Parent: [Object](object) duplicable?() Show source ``` # File activesupport/lib/active_support/core_ext/object/duplicable.rb, line 46 def duplicable? false end ``` Unbound methods are not duplicable: ``` method(:puts).unbind.duplicable? # => false method(:puts).unbind.dup # => TypeError: allocator undefined for UnboundMethod ``` rails module Singleton module Singleton ================= duplicable?() Show source ``` # File activesupport/lib/active_support/core_ext/object/duplicable.rb, line 57 def duplicable? 
false end ``` [`Singleton`](singleton) instances are not duplicable: ``` Class.new.include(Singleton).instance.dup # TypeError (can't dup instance of singleton) ``` rails module ActionText module ActionText ================== gem\_version() Show source ``` # File actiontext/lib/action_text/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the currently-loaded version of Action Text as a `Gem::Version`. version() Show source ``` # File actiontext/lib/action_text/version.rb, line 7 def self.version gem_version end ``` Returns the currently-loaded version of Action Text as a `Gem::Version`.
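Returning a `Gem::Version` rather than a plain string matters because version strings do not compare correctly as text. `Gem::Version` (part of RubyGems, loaded by default in modern Ruby) compares release segments numerically:

```ruby
# Gem::Version compares segment-wise, unlike String's lexical comparison.
Gem::Version.new("7.0.10") > Gem::Version.new("7.0.4")  # => true
"7.0.10" > "7.0.4"                                      # => false ("1" sorts before "4")
```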
rails class Time class Time =========== Parent: [Object](object) Included modules: [DateAndTime::Calculations](dateandtime/calculations), DateAndTime::Compatibility, [DateAndTime::Zones](dateandtime/zones) COMMON\_YEAR\_DAYS\_IN\_MONTH DATE\_FORMATS zone\_default[RW] ===(other) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 18 def ===(other) super || (self == Time && other.is_a?(ActiveSupport::TimeWithZone)) end ``` Overriding case equality method so that it returns true for [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) instances Calls superclass method at(\*args, \*\*kwargs) Also aliased as: [at\_without\_coercion](time#method-c-at_without_coercion) Alias for: [at\_with\_coercion](time#method-c-at_with_coercion) at\_with\_coercion(\*args, \*\*kwargs) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 45 def at_with_coercion(*args, **kwargs) return at_without_coercion(*args, **kwargs) if args.size != 1 || !kwargs.empty? # Time.at can be called with a time or numerical value time_or_number = args.first if time_or_number.is_a?(ActiveSupport::TimeWithZone) at_without_coercion(time_or_number.to_r).getlocal elsif time_or_number.is_a?(DateTime) at_without_coercion(time_or_number.to_f).getlocal else at_without_coercion(time_or_number) end end ``` Layers additional behavior on [`Time.at`](time#method-c-at) so that [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) and [`DateTime`](datetime) instances can be used when called with a single argument Also aliased as: [at](time#method-c-at) at\_without\_coercion(\*args, \*\*kwargs) Alias for: [at](time#method-c-at) current() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 39 def current ::Time.zone ? ::Time.zone.now : ::Time.now end ``` Returns `Time.zone.now` when `Time.zone` or `config.time_zone` are set, otherwise just returns `Time.now`. 
days\_in\_month(month, year = current.year) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 24 def days_in_month(month, year = current.year) if month == 2 && ::Date.gregorian_leap?(year) 29 else COMMON_YEAR_DAYS_IN_MONTH[month] end end ``` Returns the number of days in the given month. If no year is specified, it will use the current year. days\_in\_year(year = current.year) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 34 def days_in_year(year = current.year) days_in_month(2, year) + 337 end ``` Returns the number of days in the given year. If no year is specified, it will use the current year. find\_zone(time\_zone) Show source ``` # File activesupport/lib/active_support/core_ext/time/zones.rb, line 94 def find_zone(time_zone) find_zone!(time_zone) rescue nil end ``` Returns a TimeZone instance matching the time zone provided. Accepts the time zone in any format supported by `Time.zone=`. Returns `nil` for invalid time zones. ``` Time.find_zone "America/New_York" # => #<ActiveSupport::TimeZone @name="America/New_York" ...> Time.find_zone "NOT-A-TIMEZONE" # => nil ``` find\_zone!(time\_zone) Show source ``` # File activesupport/lib/active_support/core_ext/time/zones.rb, line 82 def find_zone!(time_zone) return time_zone unless time_zone ActiveSupport::TimeZone[time_zone] || raise(ArgumentError, "Invalid Timezone: #{time_zone}") end ``` Returns a TimeZone instance matching the time zone provided. Accepts the time zone in any format supported by `Time.zone=`. Raises an `ArgumentError` for invalid time zones. ``` Time.find_zone! "America/New_York" # => #<ActiveSupport::TimeZone @name="America/New_York" ...> Time.find_zone! "EST" # => #<ActiveSupport::TimeZone @name="EST" ...> Time.find_zone! -5.hours # => #<ActiveSupport::TimeZone @name="Bogota" ...> Time.find_zone! nil # => nil Time.find_zone! false # => false Time.find_zone! 
"NOT-A-TIMEZONE" # => ArgumentError: Invalid Timezone: NOT-A-TIMEZONE ``` rfc3339(str) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 69 def rfc3339(str) parts = Date._rfc3339(str) raise ArgumentError, "invalid date" if parts.empty? Time.new( parts.fetch(:year), parts.fetch(:mon), parts.fetch(:mday), parts.fetch(:hour), parts.fetch(:min), parts.fetch(:sec) + parts.fetch(:sec_fraction, 0), parts.fetch(:offset) ) end ``` Creates a `Time` instance from an RFC 3339 string. ``` Time.rfc3339('1999-12-31T14:00:00-10:00') # => 2000-01-01 00:00:00 -1000 ``` If the time or offset components are missing then an `ArgumentError` will be raised. ``` Time.rfc3339('1999-12-31') # => ArgumentError: invalid date ``` use\_zone(time\_zone) { || ... } Show source ``` # File activesupport/lib/active_support/core_ext/time/zones.rb, line 62 def use_zone(time_zone) new_zone = find_zone!(time_zone) begin old_zone, ::Time.zone = ::Time.zone, new_zone yield ensure ::Time.zone = old_zone end end ``` Allows override of `Time.zone` locally inside supplied block; resets `Time.zone` to existing value when done. ``` class ApplicationController < ActionController::Base around_action :set_time_zone private def set_time_zone Time.use_zone(current_user.timezone) { yield } end end ``` NOTE: This won't affect any `ActiveSupport::TimeWithZone` objects that have already been created, e.g. any model timestamp attributes that have been read before the block will remain in the application's default timezone. zone() Show source ``` # File activesupport/lib/active_support/core_ext/time/zones.rb, line 14 def zone ::ActiveSupport::IsolatedExecutionState[:time_zone] || zone_default end ``` Returns the TimeZone for the current request, if this has been set (via [`Time.zone=`](time#method-c-zone-3D)). If `Time.zone` has not been set for the current request, returns the TimeZone specified in `config.time_zone`. 
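`Date._rfc3339`, which does the actual parsing in the `rfc3339` source above, is part of Ruby's standard `date` library, so the same construction runs without Active Support. A sketch mirroring that source (the name `parse_rfc3339` is invented for the example):

```ruby
require "date"

# Mirror of the Rails source above, minus the Time.rfc3339 class-method wrapper.
def parse_rfc3339(str)
  parts = Date._rfc3339(str)
  raise ArgumentError, "invalid date" if parts.empty?
  Time.new(
    parts.fetch(:year), parts.fetch(:mon), parts.fetch(:mday),
    parts.fetch(:hour), parts.fetch(:min),
    parts.fetch(:sec) + parts.fetch(:sec_fraction, 0),
    parts.fetch(:offset)  # seconds east of UTC; Time.new accepts an Integer offset
  )
end

t = parse_rfc3339("1999-12-31T14:00:00-10:00")
t.utc_offset  # => -36000
```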
zone=(time\_zone) Show source ``` # File activesupport/lib/active_support/core_ext/time/zones.rb, line 41 def zone=(time_zone) ::ActiveSupport::IsolatedExecutionState[:time_zone] = find_zone!(time_zone) end ``` Sets `Time.zone` to a TimeZone object for the current request/thread. This method accepts any of the following: * A Rails TimeZone object. * An identifier for a Rails TimeZone object (e.g., “Eastern [`Time`](time) (US & Canada)”, `-5.hours`). * A TZInfo::Timezone object. * An identifier for a TZInfo::Timezone object (e.g., “America/New\_York”). Here's an example of how you might set `Time.zone` on a per request basis and reset it when the request is done. `current_user.time_zone` just needs to return a string identifying the user's preferred time zone: ``` class ApplicationController < ActionController::Base around_action :set_time_zone def set_time_zone if logged_in? Time.use_zone(current_user.time_zone) { yield } else yield end end end ``` -(other) Also aliased as: [minus\_without\_duration](time#method-i-minus_without_duration), [minus\_without\_coercion](time#method-i-minus_without_coercion) Alias for: [minus\_with\_coercion](time#method-i-minus_with_coercion) <=>(other) Also aliased as: [compare\_without\_coercion](time#method-i-compare_without_coercion) Alias for: [compare\_with\_coercion](time#method-i-compare_with_coercion) acts\_like\_time?() Show source ``` # File activesupport/lib/active_support/core_ext/time/acts_like.rb, line 7 def acts_like_time? true end ``` Duck-types as a Time-like class. See [`Object#acts_like?`](object#method-i-acts_like-3F). advance(options) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 182 def advance(options) unless options[:weeks].nil? options[:weeks], partial_weeks = options[:weeks].divmod(1) options[:days] = options.fetch(:days, 0) + 7 * partial_weeks end unless options[:days].nil? 
options[:days], partial_days = options[:days].divmod(1) options[:hours] = options.fetch(:hours, 0) + 24 * partial_days end d = to_date.gregorian.advance(options) time_advanced_by_date = change(year: d.year, month: d.month, day: d.day) seconds_to_advance = \ options.fetch(:seconds, 0) + options.fetch(:minutes, 0) * 60 + options.fetch(:hours, 0) * 3600 if seconds_to_advance.zero? time_advanced_by_date else time_advanced_by_date.since(seconds_to_advance) end end ``` Uses [`Date`](date) to provide precise [`Time`](time) calculations for years, months, and days according to the proleptic Gregorian calendar. The `options` parameter takes a hash with any of these keys: `:years`, `:months`, `:weeks`, `:days`, `:hours`, `:minutes`, `:seconds`. ``` Time.new(2015, 8, 1, 14, 35, 0).advance(seconds: 1) # => 2015-08-01 14:35:01 -0700 Time.new(2015, 8, 1, 14, 35, 0).advance(minutes: 1) # => 2015-08-01 14:36:00 -0700 Time.new(2015, 8, 1, 14, 35, 0).advance(hours: 1) # => 2015-08-01 15:35:00 -0700 Time.new(2015, 8, 1, 14, 35, 0).advance(days: 1) # => 2015-08-02 14:35:00 -0700 Time.new(2015, 8, 1, 14, 35, 0).advance(weeks: 1) # => 2015-08-08 14:35:00 -0700 ``` ago(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 208 def ago(seconds) since(-seconds) end ``` Returns a new [`Time`](time) representing the time a number of seconds ago, this is basically a wrapper around the [`Numeric`](numeric) extension at\_beginning\_of\_day() Alias for: [beginning\_of\_day](time#method-i-beginning_of_day) at\_beginning\_of\_hour() Alias for: [beginning\_of\_hour](time#method-i-beginning_of_hour) at\_beginning\_of\_minute() Alias for: [beginning\_of\_minute](time#method-i-beginning_of_minute) at\_end\_of\_day() Alias for: [end\_of\_day](time#method-i-end_of_day) at\_end\_of\_hour() Alias for: [end\_of\_hour](time#method-i-end_of_hour) at\_end\_of\_minute() Alias for: [end\_of\_minute](time#method-i-end_of_minute) at\_midday() Alias for: 
[middle\_of\_day](time#method-i-middle_of_day) at\_middle\_of\_day() Alias for: [middle\_of\_day](time#method-i-middle_of_day) at\_midnight() Alias for: [beginning\_of\_day](time#method-i-beginning_of_day) at\_noon() Alias for: [middle\_of\_day](time#method-i-middle_of_day) beginning\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 221 def beginning_of_day change(hour: 0) end ``` Returns a new [`Time`](time) representing the start of the day (0:00) Also aliased as: [midnight](time#method-i-midnight), [at\_midnight](time#method-i-at_midnight), [at\_beginning\_of\_day](time#method-i-at_beginning_of_day) beginning\_of\_hour() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 250 def beginning_of_hour change(min: 0) end ``` Returns a new [`Time`](time) representing the start of the hour (x:00) Also aliased as: [at\_beginning\_of\_hour](time#method-i-at_beginning_of_hour) beginning\_of\_minute() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 266 def beginning_of_minute change(sec: 0) end ``` Returns a new [`Time`](time) representing the start of the minute (x:xx:00) Also aliased as: [at\_beginning\_of\_minute](time#method-i-at_beginning_of_minute) ceil(precision = 0) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 121 def ceil(precision = 0) change(nsec: 0) + subsec.ceil(precision) end ``` change(options) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 138 def change(options) new_year = options.fetch(:year, year) new_month = options.fetch(:month, month) new_day = options.fetch(:day, day) new_hour = options.fetch(:hour, hour) new_min = options.fetch(:min, options[:hour] ? 0 : min) new_sec = options.fetch(:sec, (options[:hour] || options[:min]) ? 
0 : sec) new_offset = options.fetch(:offset, nil) if new_nsec = options[:nsec] raise ArgumentError, "Can't change both :nsec and :usec at the same time: #{options.inspect}" if options[:usec] new_usec = Rational(new_nsec, 1000) else new_usec = options.fetch(:usec, (options[:hour] || options[:min] || options[:sec]) ? 0 : Rational(nsec, 1000)) end raise ArgumentError, "argument out of range" if new_usec >= 1000000 new_sec += Rational(new_usec, 1000000) if new_offset ::Time.new(new_year, new_month, new_day, new_hour, new_min, new_sec, new_offset) elsif utc? ::Time.utc(new_year, new_month, new_day, new_hour, new_min, new_sec) elsif zone&.respond_to?(:utc_to_local) ::Time.new(new_year, new_month, new_day, new_hour, new_min, new_sec, zone) elsif zone ::Time.local(new_year, new_month, new_day, new_hour, new_min, new_sec) else ::Time.new(new_year, new_month, new_day, new_hour, new_min, new_sec, utc_offset) end end ``` Returns a new [`Time`](time) where one or more of the elements have been changed according to the `options` parameter. The time options (`:hour`, `:min`, `:sec`, `:usec`, `:nsec`) reset cascadingly, so if only the hour is passed, then minute, sec, usec and nsec is set to 0. If the hour and minute is passed, then sec, usec and nsec is set to 0. The `options` parameter takes a hash with any of these keys: `:year`, `:month`, `:day`, `:hour`, `:min`, `:sec`, `:usec`, `:nsec`, `:offset`. Pass either `:usec` or `:nsec`, not both. 
``` Time.new(2012, 8, 29, 22, 35, 0).change(day: 1) # => Time.new(2012, 8, 1, 22, 35, 0) Time.new(2012, 8, 29, 22, 35, 0).change(year: 1981, day: 1) # => Time.new(1981, 8, 1, 22, 35, 0) Time.new(2012, 8, 29, 22, 35, 0).change(year: 1981, hour: 0) # => Time.new(1981, 8, 29, 0, 0, 0) ``` compare\_with\_coercion(other) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 312 def compare_with_coercion(other) # we're avoiding Time#to_datetime and Time#to_time because they're expensive if other.class == Time compare_without_coercion(other) elsif other.is_a?(Time) compare_without_coercion(other.to_time) else to_datetime <=> other end end ``` Layers additional behavior on Time#<=> so that [`DateTime`](datetime) and [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) instances can be chronologically compared with a [`Time`](time) Also aliased as: [<=>](time#method-i-3C-3D-3E) compare\_without\_coercion(other) Alias for: [<=>](time#method-i-3C-3D-3E) end\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 239 def end_of_day change( hour: 23, min: 59, sec: 59, usec: Rational(999999999, 1000) ) end ``` Returns a new [`Time`](time) representing the end of the day, 23:59:59.999999 Also aliased as: [at\_end\_of\_day](time#method-i-at_end_of_day) end\_of\_hour() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 256 def end_of_hour change( min: 59, sec: 59, usec: Rational(999999999, 1000) ) end ``` Returns a new [`Time`](time) representing the end of the hour, x:59:59.999999 Also aliased as: [at\_end\_of\_hour](time#method-i-at_end_of_hour) end\_of\_minute() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 272 def end_of_minute change( sec: 59, usec: Rational(999999999, 1000) ) end ``` Returns a new [`Time`](time) representing the end of the minute, x:xx:59.999999 Also aliased as: 
[at\_end\_of\_minute](time#method-i-at_end_of_minute) eql?(other) Also aliased as: [eql\_without\_coercion](time#method-i-eql_without_coercion) Alias for: [eql\_with\_coercion](time#method-i-eql_with_coercion) eql\_with\_coercion(other) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 327 def eql_with_coercion(other) # if other is an ActiveSupport::TimeWithZone, coerce a Time instance from it so we can do eql? comparison other = other.comparable_time if other.respond_to?(:comparable_time) eql_without_coercion(other) end ``` Layers additional behavior on [`Time#eql?`](time#method-i-eql-3F) so that [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) instances can be eql? to an equivalent [`Time`](time) Also aliased as: [eql?](time#method-i-eql-3F) eql\_without\_coercion(other) Alias for: [eql?](time#method-i-eql-3F) floor(precision = 0) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 112 def floor(precision = 0) change(nsec: 0) + subsec.floor(precision) end ``` formatted\_offset(colon = true, alternate\_utc\_string = nil) Show source ``` # File activesupport/lib/active_support/core_ext/time/conversions.rb, line 69 def formatted_offset(colon = true, alternate_utc_string = nil) utc? && alternate_utc_string || ActiveSupport::TimeZone.seconds_to_utc_offset(utc_offset, colon) end ``` Returns a formatted string of the offset from UTC, or an alternative string if the time zone is already UTC. 
``` Time.local(2000).formatted_offset # => "-06:00" Time.local(2000).formatted_offset(false) # => "-0600" ``` in(seconds) Alias for: [since](time#method-i-since) midday() Alias for: [middle\_of\_day](time#method-i-middle_of_day) middle\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 229 def middle_of_day change(hour: 12) end ``` Returns a new [`Time`](time) representing the middle of the day (12:00) Also aliased as: [midday](time#method-i-midday), [noon](time#method-i-noon), [at\_midday](time#method-i-at_midday), [at\_noon](time#method-i-at_noon), [at\_middle\_of\_day](time#method-i-at_middle_of_day) midnight() Alias for: [beginning\_of\_day](time#method-i-beginning_of_day) minus\_with\_coercion(other) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 303 def minus_with_coercion(other) other = other.comparable_time if other.respond_to?(:comparable_time) other.is_a?(DateTime) ? to_f - other.to_f : minus_without_coercion(other) end ``` [`Time#-`](time#method-i-2D) can also be used to determine the number of seconds between two [`Time`](time) instances. We're layering on additional behavior so that [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) instances are coerced into values that [`Time#-`](time#method-i-2D) will recognize Also aliased as: [-](time#method-i-2D) minus\_without\_coercion(other) Alias for: [-](time#method-i-2D) minus\_without\_duration(other) Alias for: [-](time#method-i-2D) next\_day(days = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 341 def next_day(days = 1) advance(days: days) end ``` Returns a new time the specified number of days in the future. next\_month(months = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 351 def next_month(months = 1) advance(months: months) end ``` Returns a new time the specified number of months in the future. 
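`advance` (and therefore `next_month`/`prev_month`) leans on `Date` for calendar-aware arithmetic, which is why month steps clamp at month ends instead of overflowing into the next month. The core of that behavior is visible with just the standard library; a simplified sketch that ignores sub-second precision and zone handling:

```ruby
require "date"

# Date#>> shifts by months and clamps to the last valid day of the target month.
t = Time.new(2015, 1, 31, 14, 35, 0)
d = t.to_date >> 1                    # 2015-02-28 (2015 is not a leap year)
next_month = Time.new(d.year, d.month, d.day, t.hour, t.min, t.sec)
# => 2015-02-28 14:35:00
```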
next\_year(years = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 361 def next_year(years = 1) advance(years: years) end ``` Returns a new time the specified number of years in the future. noon() Alias for: [middle\_of\_day](time#method-i-middle_of_day) prev\_day(days = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 336 def prev_day(days = 1) advance(days: -days) end ``` Returns a new time the specified number of days ago. prev\_month(months = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 346 def prev_month(months = 1) advance(months: -months) end ``` Returns a new time the specified number of months ago. prev\_year(years = 1) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 356 def prev_year(years = 1) advance(years: -years) end ``` Returns a new time the specified number of years ago. sec\_fraction() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 107 def sec_fraction subsec end ``` Returns the fraction of a second as a `Rational` ``` Time.new(2012, 8, 29, 0, 0, 0.5).sec_fraction # => (1/2) ``` seconds\_since\_midnight() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 91 def seconds_since_midnight to_i - change(hour: 0).to_i + (usec / 1.0e+6) end ``` Returns the number of seconds since 00:00:00. ``` Time.new(2012, 8, 29, 0, 0, 0).seconds_since_midnight # => 0.0 Time.new(2012, 8, 29, 12, 34, 56).seconds_since_midnight # => 45296.0 Time.new(2012, 8, 29, 23, 59, 59).seconds_since_midnight # => 86399.0 ``` seconds\_until\_end\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 100 def seconds_until_end_of_day end_of_day.to_i - to_i end ``` Returns the number of seconds until 23:59:59. 
``` Time.new(2012, 8, 29, 0, 0, 0).seconds_until_end_of_day # => 86399 Time.new(2012, 8, 29, 12, 34, 56).seconds_until_end_of_day # => 41103 Time.new(2012, 8, 29, 23, 59, 59).seconds_until_end_of_day # => 0 ``` since(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/time/calculations.rb, line 213 def since(seconds) self + seconds rescue to_datetime.since(seconds) end ``` Returns a new [`Time`](time) representing the time a number of seconds since the instance time Also aliased as: [in](time#method-i-in) to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/core_ext/time/conversions.rb, line 53 def to_formatted_s(format = :default) if formatter = DATE_FORMATS[format] formatter.respond_to?(:call) ? formatter.call(self).to_s : strftime(formatter) else # Change to `to_s` when deprecation is gone. Also deprecate `to_default_s`. to_default_s end end ``` Converts to a formatted string. See [`DATE_FORMATS`](time#DATE_FORMATS) for built-in formats. This method is aliased to `to_fs`. ``` time = Time.now # => 2007-01-18 06:10:17 -06:00 time.to_formatted_s(:time) # => "06:10" time.to_fs(:time) # => "06:10" time.to_formatted_s(:db) # => "2007-01-18 06:10:17" time.to_formatted_s(:number) # => "20070118061017" time.to_formatted_s(:short) # => "18 Jan 06:10" time.to_formatted_s(:long) # => "January 18, 2007 06:10" time.to_formatted_s(:long_ordinal) # => "January 18th, 2007 06:10" time.to_formatted_s(:rfc822) # => "Thu, 18 Jan 2007 06:10:17 -0600" time.to_formatted_s(:iso8601) # => "2007-01-18T06:10:17-06:00" ``` Adding your own time formats to `to_formatted_s` ------------------------------------------------ You can add your own formats to the [`Time::DATE_FORMATS`](time#DATE_FORMATS) hash. Use the format name as the hash key and either a strftime string or Proc instance that takes a time argument as the value. 
``` # config/initializers/time_formats.rb Time::DATE_FORMATS[:month_and_year] = '%B %Y' Time::DATE_FORMATS[:short_ordinal] = ->(time) { time.strftime("%B #{time.day.ordinalize}") } ``` Also aliased as: [to\_fs](time#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](time#method-i-to_formatted_s) to\_time() Show source ``` # File activesupport/lib/active_support/core_ext/time/compatibility.rb, line 13 def to_time preserve_timezone ? self : getlocal end ``` Either return `self` or the time in the local system timezone depending on the setting of `ActiveSupport.to_time_preserves_timezone`.
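The `DATE_FORMATS` mechanism described above boils down to a hash lookup followed by either a `strftime` call or a `Proc` call. A plain-Ruby sketch of the same dispatch, with no Active Support loaded (`FORMATS` and `format_time` are hypothetical stand-ins for `Time::DATE_FORMATS` and `to_formatted_s`):

```ruby
# Hypothetical stand-in for Time::DATE_FORMATS: format name => strftime string or Proc.
FORMATS = {
  db:             "%Y-%m-%d %H:%M:%S",
  number:         "%Y%m%d%H%M%S",
  month_and_year: "%B %Y",
  verbose:        ->(time) { time.strftime("%B %-d, %Y at %H:%M") }
}

# Look up the formatter and apply it, mirroring the branch in to_formatted_s.
def format_time(time, format)
  formatter = FORMATS.fetch(format)
  formatter.respond_to?(:call) ? formatter.call(time).to_s : time.strftime(formatter)
end

t = Time.utc(2007, 1, 18, 6, 10, 17)
format_time(t, :db)      # => "2007-01-18 06:10:17"
format_time(t, :number)  # => "20070118061017"
format_time(t, :verbose) # applies the Proc branch
```

Registering a custom format is just adding a hash entry, which is exactly what the `config/initializers/time_formats.rb` example above does.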
rails class Object class Object ============= Parent: BasicObject Included modules: ActiveRecord::TestFixtures APP\_PATH acts\_like?(duck) Show source ``` # File activesupport/lib/active_support/core_ext/object/acts_like.rb, line 33 def acts_like?(duck) case duck when :time respond_to? :acts_like_time? when :date respond_to? :acts_like_date? when :string respond_to? :acts_like_string? else respond_to? :"acts_like_#{duck}?" end end ``` Provides a way to check whether some class acts like some other class based on the existence of an appropriately-named marker method. A class that provides the same interface as `SomeClass` may define a marker method named `acts_like_some_class?` to signal its compatibility to callers of `acts_like?(:some_class)`. For example, Active Support extends `Date` to define an `acts_like_date?` method, and extends `Time` to define `acts_like_time?`. As a result, developers can call `x.acts_like?(:time)` and `x.acts_like?(:date)` to test duck-type compatibility, and classes that are able to act like `Time` can also define an `acts_like_time?` method to interoperate. Note that the marker method is only expected to exist. It isn't called, so its body or return value are irrelevant. #### Example: A class that provides the same interface as `String` This class may define: ``` class Stringish def acts_like_string? end end ``` Then client code can query for duck-type-safeness this way: ``` Stringish.new.acts_like?(:string) # => true ``` blank?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 18 def blank? respond_to?(:empty?) ? !!empty? : !self end ``` An object is blank if it's false, empty, or a whitespace string. For example, `nil`, '', ' ', [], {}, and `false` are all blank. This simplifies ``` !address || address.empty? ``` to ``` address.blank? ``` @return [true, false] deep\_dup() Show source ``` # File activesupport/lib/active_support/core_ext/object/deep_dup.rb, line 15 def deep_dup duplicable? ? 
dup : self end ``` Returns a deep copy of object if it's duplicable. If it's not duplicable, returns `self`. ``` object = Object.new dup = object.deep_dup dup.instance_variable_set(:@a, 1) object.instance_variable_defined?(:@a) # => false dup.instance_variable_defined?(:@a) # => true ``` duplicable?() Show source ``` # File activesupport/lib/active_support/core_ext/object/duplicable.rb, line 26 def duplicable? true end ``` Can you safely dup this object? False for method objects; true otherwise. html\_safe?() Show source ``` # File activesupport/lib/active_support/core_ext/string/output_safety.rb, line 122 def html_safe? false end ``` in?(another\_object) Show source ``` # File activesupport/lib/active_support/core_ext/object/inclusion.rb, line 12 def in?(another_object) another_object.include?(self) rescue NoMethodError raise ArgumentError.new("The parameter passed to #in? must respond to #include?") end ``` Returns true if this object is included in the argument. Argument must be any object which responds to `#include?`. Usage: ``` characters = ["Konata", "Kagami", "Tsukasa"] "Konata".in?(characters) # => true ``` This will throw an `ArgumentError` if the argument doesn't respond to `#include?`. instance\_values() Show source ``` # File activesupport/lib/active_support/core_ext/object/instance_variables.rb, line 14 def instance_values Hash[instance_variables.map { |name| [name[1..-1], instance_variable_get(name)] }] end ``` Returns a hash with string keys that maps instance variable names without “@” to their corresponding values. ``` class C def initialize(x, y) @x, @y = x, y end end C.new(0, 1).instance_values # => {"x" => 0, "y" => 1} ``` instance\_variable\_names() Show source ``` # File activesupport/lib/active_support/core_ext/object/instance_variables.rb, line 27 def instance_variable_names instance_variables.map(&:to_s) end ``` Returns an array of instance variable names as strings including “@”. 
``` class C def initialize(x, y) @x, @y = x, y end end C.new(0, 1).instance_variable_names # => ["@y", "@x"] ``` presence() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 45 def presence self if present? end ``` Returns the receiver if it's present otherwise returns `nil`. `object.presence` is equivalent to ``` object.present? ? object : nil ``` For example, something like ``` state = params[:state] if params[:state].present? country = params[:country] if params[:country].present? region = state || country || 'US' ``` becomes ``` region = params[:state].presence || params[:country].presence || 'US' ``` @return [Object] presence\_in(another\_object) Show source ``` # File activesupport/lib/active_support/core_ext/object/inclusion.rb, line 26 def presence_in(another_object) in?(another_object) ? self : nil end ``` Returns the receiver if it's included in the argument otherwise returns `nil`. Argument must be any object which responds to `#include?`. Usage: ``` params[:bucket_type].presence_in %w( project calendar ) ``` This will throw an `ArgumentError` if the argument doesn't respond to `#include?`. @return [Object] present?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 25 def present? !blank? end ``` An object is present if it's not blank. @return [true, false] to\_param() Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 7 def to_param to_s end ``` Alias of `to_s`. to\_query(key) Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 13 def to_query(key) "#{CGI.escape(key.to_param)}=#{CGI.escape(to_param.to_s)}" end ``` Converts an object into a string suitable for use as a URL query string, using the given `key` as the param name. 
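The escaping step behind `to_query` can be reproduced with the standard library alone. A rough sketch of the key=value encoding (`query_pair` is a hypothetical helper; Active Support's real version also recurses into arrays and hashes via `to_param`):

```ruby
require "cgi"

# Build a single URL query pair, CGI-escaping both key and value,
# as to_query does for plain scalar values.
def query_pair(key, value)
  "#{CGI.escape(key.to_s)}=#{CGI.escape(value.to_s)}"
end

query_pair(:name, "Donald E. Knuth") # => "name=Donald+E.+Knuth"
query_pair("q", "a & b")             # => "q=a+%26+b"
```

Note that `CGI.escape` encodes spaces as `+`, which is the form-encoding convention query strings use.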
try(\*args, &block) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 39 ``` Invokes the public method whose name goes as first argument just like `public_send` does, except that if the receiver does not respond to it the call returns `nil` rather than raising an exception. This method is defined to be able to write ``` @person.try(:name) ``` instead of ``` @person.name if @person ``` `try` calls can be chained: ``` @person.try(:spouse).try(:name) ``` instead of ``` @person.spouse.name if @person && @person.spouse ``` `try` will also return `nil` if the receiver does not respond to the method: ``` @person.try(:non_existing_method) # => nil ``` instead of ``` @person.non_existing_method if @person.respond_to?(:non_existing_method) # => nil ``` `try` returns `nil` when called on `nil` regardless of whether it responds to the method: ``` nil.try(:to_i) # => nil, rather than 0 ``` Arguments and blocks are forwarded to the method if invoked: ``` @posts.try(:each_slice, 2) do |a, b| ... end ``` The number of arguments in the signature must match. If the object responds to the method the call is attempted and `ArgumentError` is still raised in case of argument mismatch. If `try` is called without arguments it yields the receiver to a given block unless it is `nil`: ``` @person.try do |p| ... end ``` You can also call try with a block without accepting an argument, and the block will be instance\_eval'ed instead: ``` @person.try { upcase.truncate(50) } ``` Please also note that `try` is defined on `Object`. Therefore, it won't work with instances of classes that do not have `Object` among their ancestors, like direct subclasses of `BasicObject`. try!(\*args, &block) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 104 ``` Same as [`try`](object#method-i-try), but raises a `NoMethodError` exception if the receiver is not `nil` and does not implement the tried method. 
``` "a".try!(:upcase) # => "A" nil.try!(:upcase) # => nil 123.try!(:upcase) # => NoMethodError: undefined method `upcase' for 123:Integer ``` with\_options(options, &block) Show source ``` # File activesupport/lib/active_support/core_ext/object/with_options.rb, line 92 def with_options(options, &block) option_merger = ActiveSupport::OptionMerger.new(self, options) if block block.arity.zero? ? option_merger.instance_eval(&block) : block.call(option_merger) else option_merger end end ``` An elegant way to factor duplication out of options passed to a series of method calls. Each method called in the block, with the block variable as the receiver, will have its options merged with the default `options` hash provided. Each method called on the block variable must take an options hash as its final argument. Without `with_options`, this code contains duplication: ``` class Account < ActiveRecord::Base has_many :customers, dependent: :destroy has_many :products, dependent: :destroy has_many :invoices, dependent: :destroy has_many :expenses, dependent: :destroy end ``` Using `with_options`, we can remove the duplication: ``` class Account < ActiveRecord::Base with_options dependent: :destroy do |assoc| assoc.has_many :customers assoc.has_many :products assoc.has_many :invoices assoc.has_many :expenses end end ``` It can also be used with an explicit receiver: ``` I18n.with_options locale: user.locale, scope: 'newsletter' do |i18n| subject i18n.t :subject body i18n.t :body, user_name: user.name end ``` When you don't pass an explicit receiver, it executes the whole block in merging options context: ``` class Account < ActiveRecord::Base with_options dependent: :destroy do has_many :customers has_many :products has_many :invoices has_many :expenses end end ``` `with_options` can also be nested since the call is forwarded to its receiver. NOTE: Each nesting level will merge inherited defaults in addition to their own. 
``` class Post < ActiveRecord::Base with_options if: :persisted?, length: { minimum: 50 } do validates :content, if: -> { content.present? } end end ``` The code is equivalent to: ``` validates :content, length: { minimum: 50 }, if: -> { content.present? } ``` Hence the inherited default for `if` key is ignored. NOTE: You cannot call class methods implicitly inside of with\_options. You can access these methods using the class name instead: ``` class Phone < ActiveRecord::Base enum phone_number_type: { home: 0, office: 1, mobile: 2 } with_options presence: true do validates :phone_number_type, inclusion: { in: Phone.phone_number_types.keys } end end ``` When the block argument is omitted, the decorated [`Object`](object) instance is returned: ``` module MyStyledHelpers def styled with_options style: "color: red;" end end # styled.link_to "I'm red", "/" # #=> <a href="/" style="color: red;">I'm red</a> # styled.button_tag "I'm red too!" # #=> <button style="color: red;">I'm red too!</button> ``` rails class Class class Class ============ Parent: [Object](object) class\_attribute(\*attrs, instance\_accessor: true, instance\_reader: instance\_accessor, instance\_writer: instance\_accessor, instance\_predicate: true, default: nil) Show source ``` # File activesupport/lib/active_support/core_ext/class/attribute.rb, line 85 def class_attribute(*attrs, instance_accessor: true, instance_reader: instance_accessor, instance_writer: instance_accessor, instance_predicate: true, default: nil) class_methods, methods = [], [] attrs.each do |name| unless name.is_a?(Symbol) || name.is_a?(String) raise TypeError, "#{name.inspect} is not a symbol nor a string" end class_methods << <<~RUBY # In case the method exists and is not public silence_redefinition_of_method def #{name} end RUBY methods << <<~RUBY if instance_reader silence_redefinition_of_method def #{name} defined?(@#{name}) ? 
@#{name} : self.class.#{name} end RUBY class_methods << <<~RUBY silence_redefinition_of_method def #{name}=(value) redefine_method(:#{name}) { value } if singleton_class? redefine_singleton_method(:#{name}) { value } value end RUBY methods << <<~RUBY if instance_writer silence_redefinition_of_method(:#{name}=) attr_writer :#{name} RUBY if instance_predicate class_methods << "silence_redefinition_of_method def #{name}?; !!self.#{name}; end" if instance_reader methods << "silence_redefinition_of_method def #{name}?; !!self.#{name}; end" end end end location = caller_locations(1, 1).first class_eval(["class << self", *class_methods, "end", *methods].join(";").tr("\n", ";"), location.path, location.lineno) attrs.each { |name| public_send("#{name}=", default) } end ``` Declare a class-level attribute whose value is inheritable by subclasses. Subclasses can change their own value and it will not impact parent class. #### Options * `:instance_reader` - Sets the instance reader method (defaults to true). * `:instance_writer` - Sets the instance writer method (defaults to true). * `:instance_accessor` - Sets both instance methods (defaults to true). * `:instance_predicate` - Sets a predicate method (defaults to true). * `:default` - Sets a default value for the attribute (defaults to nil). #### Examples ``` class Base class_attribute :setting end class Subclass < Base end Base.setting = true Subclass.setting # => true Subclass.setting = false Subclass.setting # => false Base.setting # => true ``` In the above case as long as Subclass does not assign a value to setting by performing `Subclass.setting = *something*`, `Subclass.setting` would read value assigned to parent class. Once Subclass assigns a value then the value assigned by Subclass would be returned. This matches normal Ruby method inheritance: think of writing an attribute on a subclass as overriding the reader method. 
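The generated code above is dense; the inheritance behavior it produces can be approximated in a few lines of plain Ruby. A sketch, assuming no Active Support (class names are illustrative only):

```ruby
# Reader walks up to the superclass until some class has written the ivar;
# the writer only touches the receiving class. This mirrors class_attribute's
# "subclasses inherit until they assign" behavior, minus the speed-up the
# real implementation gets from redefining the reader on assignment.
class Base
  @setting = nil

  class << self
    attr_writer :setting

    def setting
      defined?(@setting) ? @setting : superclass.setting
    end
  end
end

class Subclass < Base; end

Base.setting = true
Subclass.setting # => true  (inherited from Base)

Subclass.setting = false
Subclass.setting # => false
Base.setting     # => true  (parent unaffected)
```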
However, you need to be aware when using `class_attribute` with mutable structures as `Array` or `Hash`. In such cases, you don't want to do changes in place. Instead use setters: ``` Base.setting = [] Base.setting # => [] Subclass.setting # => [] # Appending in child changes both parent and child because it is the same object: Subclass.setting << :foo Base.setting # => [:foo] Subclass.setting # => [:foo] # Use setters to not propagate changes: Base.setting = [] Subclass.setting += [:foo] Base.setting # => [] Subclass.setting # => [:foo] ``` For convenience, an instance predicate method is defined as well. To skip it, pass `instance_predicate: false`. ``` Subclass.setting? # => false ``` Instances may overwrite the class value in the same way: ``` Base.setting = true object = Base.new object.setting # => true object.setting = false object.setting # => false Base.setting # => true ``` To opt out of the instance reader method, pass `instance_reader: false`. ``` object.setting # => NoMethodError object.setting? # => NoMethodError ``` To opt out of the instance writer method, pass `instance_writer: false`. ``` object.setting = false # => NoMethodError ``` To opt out of both instance methods, pass `instance_accessor: false`. To set a default value for the attribute, pass `default:`, like so: ``` class_attribute :settings, default: {} ``` descendants() Show source ``` # File activesupport/lib/active_support/core_ext/class/subclasses.rb, line 7 def descendants subclasses.concat(subclasses.flat_map(&:descendants)) end ``` subclasses() Show source ``` # File activesupport/lib/active_support/core_ext/class/subclasses.rb, line 38 def subclasses descendants.select { |descendant| descendant.superclass == self } end ``` Returns an array with the direct children of `self`. 
``` class Foo; end class Bar < Foo; end class Baz < Bar; end Foo.subclasses # => [Bar] ``` rails class String class String ============= Parent: [Object](object) [`String`](string) inflections define new methods on the [`String`](string) class to transform names for different purposes. For instance, you can figure out the name of a table from the name of a class. ``` 'ScaleScore'.tableize # => "scale_scores" ``` BLANK\_RE ENCODED\_BLANKS acts\_like\_string?() Show source ``` # File activesupport/lib/active_support/core_ext/string/behavior.rb, line 5 def acts_like_string? true end ``` Enables more predictable duck-typing on String-like classes. See `Object#acts_like?`. at(position) Show source ``` # File activesupport/lib/active_support/core_ext/string/access.rb, line 29 def at(position) self[position] end ``` If you pass a single integer, returns a substring of one character at that position. The first character of the string is at position 0, the next at position 1, and so on. If a range is supplied, a substring containing characters at offsets given by the range is returned. In both cases, if an offset is negative, it is counted from the end of the string. Returns `nil` if the initial offset falls outside the string. Returns an empty string if the beginning of the range is greater than the end of the string. ``` str = "hello" str.at(0) # => "h" str.at(1..3) # => "ell" str.at(-2) # => "l" str.at(-2..-1) # => "lo" str.at(5) # => nil str.at(5..-1) # => "" ``` If a [`Regexp`](regexp) is given, the matching portion of the string is returned. If a [`String`](string) is given, that given string is returned if it occurs in the string. In both cases, `nil` is returned if there is no match. ``` str = "hello" str.at(/lo/) # => "lo" str.at(/ol/) # => nil str.at("lo") # => "lo" str.at("ol") # => nil ``` blank?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 121 def blank? # The regexp that matches blank strings is expensive. 
For the case of empty # strings we can speed up this method (~3.5x) with an empty? call. The # penalty for the rest of strings is marginal. empty? || begin BLANK_RE.match?(self) rescue Encoding::CompatibilityError ENCODED_BLANKS[self.encoding].match?(self) end end ``` A string is blank if it's empty or contains whitespaces only: ``` ''.blank? # => true ' '.blank? # => true "\t\n\r".blank? # => true ' blah '.blank? # => false ``` Unicode whitespace is supported: ``` "\u00a0".blank? # => true ``` @return [true, false] camelcase(first\_letter = :upper) Alias for: [camelize](string#method-i-camelize) camelize(first\_letter = :upper) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 103 def camelize(first_letter = :upper) case first_letter when :upper ActiveSupport::Inflector.camelize(self, true) when :lower ActiveSupport::Inflector.camelize(self, false) else raise ArgumentError, "Invalid option, use either :upper or :lower." end end ``` By default, `camelize` converts strings to UpperCamelCase. If the argument to camelize is set to `:lower` then camelize produces lowerCamelCase. `camelize` will also convert '/' to '::' which is useful for converting paths to namespaces. ``` 'active_record'.camelize # => "ActiveRecord" 'active_record'.camelize(:lower) # => "activeRecord" 'active_record/errors'.camelize # => "ActiveRecord::Errors" 'active_record/errors'.camelize(:lower) # => "activeRecord::Errors" ``` `camelize` is also aliased as `camelcase`. See [`ActiveSupport::Inflector.camelize`](activesupport/inflector#method-i-camelize). Also aliased as: [camelcase](string#method-i-camelcase) classify() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 243 def classify ActiveSupport::Inflector.classify(self) end ``` Creates a class name from a plural table name like Rails does for table names to models. Note that this returns a string and not a class. 
(To convert to an actual class follow `classify` with `constantize`.) ``` 'ham_and_eggs'.classify # => "HamAndEgg" 'posts'.classify # => "Post" ``` See [`ActiveSupport::Inflector.classify`](activesupport/inflector#method-i-classify). constantize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 73 def constantize ActiveSupport::Inflector.constantize(self) end ``` `constantize` tries to find a declared constant with the name specified in the string. It raises a [`NameError`](nameerror) when the name is not in CamelCase or is not initialized. ``` 'Module'.constantize # => Module 'Class'.constantize # => Class 'blargle'.constantize # => NameError: wrong constant name blargle ``` See [`ActiveSupport::Inflector.constantize`](activesupport/inflector#method-i-constantize). dasherize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 152 def dasherize ActiveSupport::Inflector.dasherize(self) end ``` Replaces underscores with dashes in the string. ``` 'puni_puni'.dasherize # => "puni-puni" ``` See [`ActiveSupport::Inflector.dasherize`](activesupport/inflector#method-i-dasherize). deconstantize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 181 def deconstantize ActiveSupport::Inflector.deconstantize(self) end ``` Removes the rightmost segment from the constant expression in the string. ``` 'Net::HTTP'.deconstantize # => "Net" '::Net::HTTP'.deconstantize # => "::Net" 'String'.deconstantize # => "" '::String'.deconstantize # => "" ''.deconstantize # => "" ``` See [`ActiveSupport::Inflector.deconstantize`](activesupport/inflector#method-i-deconstantize). See also `demodulize`. demodulize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 166 def demodulize ActiveSupport::Inflector.demodulize(self) end ``` Removes the module part from the constant expression in the string. 
``` 'ActiveSupport::Inflector::Inflections'.demodulize # => "Inflections" 'Inflections'.demodulize # => "Inflections" '::Inflections'.demodulize # => "Inflections" ''.demodulize # => '' ``` See [`ActiveSupport::Inflector.demodulize`](activesupport/inflector#method-i-demodulize). See also `deconstantize`. exclude?(string) Show source ``` # File activesupport/lib/active_support/core_ext/string/exclude.rb, line 10 def exclude?(string) !include?(string) end ``` The inverse of `String#include?`. Returns true if the string does not include the other string. ``` "hello".exclude? "lo" # => false "hello".exclude? "ol" # => true "hello".exclude? ?h # => false ``` first(limit = 1) Show source ``` # File activesupport/lib/active_support/core_ext/string/access.rb, line 78 def first(limit = 1) self[0, limit] || raise(ArgumentError, "negative limit") end ``` Returns the first character. If a limit is supplied, returns a substring from the beginning of the string until it reaches the limit value. If the given limit is greater than or equal to the string length, returns a copy of self. ``` str = "hello" str.first # => "h" str.first(1) # => "h" str.first(2) # => "he" str.first(0) # => "" str.first(6) # => "hello" ``` foreign\_key(separate\_class\_name\_and\_id\_with\_underscore = true) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 290 def foreign_key(separate_class_name_and_id_with_underscore = true) ActiveSupport::Inflector.foreign_key(self, separate_class_name_and_id_with_underscore) end ``` Creates a foreign key name from a class name. `separate_class_name_and_id_with_underscore` sets whether the method should put '\_' between the name and 'id'. ``` 'Message'.foreign_key # => "message_id" 'Message'.foreign_key(false) # => "messageid" 'Admin::Post'.foreign_key # => "post_id" ``` See [`ActiveSupport::Inflector.foreign_key`](activesupport/inflector#method-i-foreign_key). 
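The demodulize + underscore + suffix pipeline behind `foreign_key` can be sketched with plain string operations (`simple_foreign_key` is a hypothetical name; the real inflector also handles acronym runs and custom inflection rules):

```ruby
# Strip any module prefix, snake_case the class name, then append the id suffix.
def simple_foreign_key(class_name, separate_with_underscore = true)
  base  = class_name.split("::").last                     # demodulize
  snake = base.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase # underscore (simplified)
  separate_with_underscore ? "#{snake}_id" : "#{snake}id"
end

simple_foreign_key("Message")        # => "message_id"
simple_foreign_key("Message", false) # => "messageid"
simple_foreign_key("Admin::Post")    # => "post_id"
```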
from(position) Show source ``` # File activesupport/lib/active_support/core_ext/string/access.rb, line 46 def from(position) self[position, length] end ``` Returns a substring from the given position to the end of the string. If the position is negative, it is counted from the end of the string. ``` str = "hello" str.from(0) # => "hello" str.from(3) # => "lo" str.from(-2) # => "lo" ``` You can mix it with the `to` method and do fun things like: ``` str = "hello" str.from(0).to(-1) # => "hello" str.from(1).to(-2) # => "ell" ``` html\_safe() Show source ``` # File activesupport/lib/active_support/core_ext/string/output_safety.rb, line 336 def html_safe ActiveSupport::SafeBuffer.new(self) end ``` Marks a string as trusted safe. It will be inserted into HTML with no additional escaping performed. It is your responsibility to ensure that the string contains no malicious content. This method is equivalent to the `raw` helper in views. It is recommended that you use `sanitize` instead of this method. It should never be called on user input. humanize(capitalize: true, keep\_id\_suffix: false) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 266 def humanize(capitalize: true, keep_id_suffix: false) ActiveSupport::Inflector.humanize(self, capitalize: capitalize, keep_id_suffix: keep_id_suffix) end ``` Capitalizes the first word, turns underscores into spaces, and (by default) strips a trailing '\_id' if present. Like `titleize`, this is meant for creating pretty output. The capitalization of the first word can be turned off by setting the optional parameter `capitalize` to false. By default, this parameter is true. The trailing '\_id' can be kept and capitalized by setting the optional parameter `keep_id_suffix` to true. By default, this parameter is false. 
``` 'employee_salary'.humanize # => "Employee salary" 'author_id'.humanize # => "Author" 'author_id'.humanize(capitalize: false) # => "author" '_id'.humanize # => "Id" 'author_id'.humanize(keep_id_suffix: true) # => "Author id" ``` See [`ActiveSupport::Inflector.humanize`](activesupport/inflector#method-i-humanize). in\_time\_zone(zone = ::Time.zone) Show source ``` # File activesupport/lib/active_support/core_ext/string/zones.rb, line 9 def in_time_zone(zone = ::Time.zone) if zone ::Time.find_zone!(zone).parse(self) else to_time end end ``` Converts [`String`](string) to a TimeWithZone in the current zone if [`Time.zone`](time#method-c-zone) or [`Time.zone_default`](time#attribute-c-zone_default) is set, otherwise converts [`String`](string) to a [`Time`](time) via [`String#to_time`](string#method-i-to_time) indent(amount, indent\_string = nil, indent\_empty\_lines = false) Show source ``` # File activesupport/lib/active_support/core_ext/string/indent.rb, line 42 def indent(amount, indent_string = nil, indent_empty_lines = false) dup.tap { |_| _.indent!(amount, indent_string, indent_empty_lines) } end ``` Indents the lines in the receiver: ``` <<EOS.indent(2) def some_method some_code end EOS # => def some_method some_code end ``` The second argument, `indent_string`, specifies which indent string to use. The default is `nil`, which tells the method to make a guess by peeking at the first indented line, and fallback to a space if there is none. ``` " foo".indent(2) # => " foo" "foo\n\t\tbar".indent(2) # => "\t\tfoo\n\t\t\t\tbar" "foo".indent(2, "\t") # => "\t\tfoo" ``` While `indent_string` is typically one space or tab, it may be any string. The third argument, `indent_empty_lines`, is a flag that says whether empty lines should be indented. Default is false. 
``` "foo\n\nbar".indent(2) # => " foo\n\n bar" "foo\n\nbar".indent(2, nil, true) # => " foo\n \n bar" ``` indent!(amount, indent\_string = nil, indent\_empty\_lines = false) Show source ``` # File activesupport/lib/active_support/core_ext/string/indent.rb, line 7 def indent!(amount, indent_string = nil, indent_empty_lines = false) indent_string = indent_string || self[/^[ \t]/] || " " re = indent_empty_lines ? /^/ : /^(?!$)/ gsub!(re, indent_string * amount) end ``` Same as `indent`, except it indents the receiver in-place. Returns the indented string, or `nil` if there was nothing to indent. inquiry() Show source ``` # File activesupport/lib/active_support/core_ext/string/inquiry.rb, line 13 def inquiry ActiveSupport::StringInquirer.new(self) end ``` Wraps the current string in the `ActiveSupport::StringInquirer` class, which gives you a prettier way to test for equality. ``` env = 'production'.inquiry env.production? # => true env.development? # => false ``` is\_utf8?() Show source ``` # File activesupport/lib/active_support/core_ext/string/multibyte.rb, line 48 def is_utf8? case encoding when Encoding::UTF_8, Encoding::US_ASCII valid_encoding? when Encoding::ASCII_8BIT dup.force_encoding(Encoding::UTF_8).valid_encoding? else false end end ``` Returns `true` if string has utf\_8 encoding. ``` utf_8_str = "some string".encode "UTF-8" iso_str = "some string".encode "ISO-8859-1" utf_8_str.is_utf8? # => true iso_str.is_utf8? # => false ``` last(limit = 1) Show source ``` # File activesupport/lib/active_support/core_ext/string/access.rb, line 92 def last(limit = 1) self[[length - limit, 0].max, limit] || raise(ArgumentError, "negative limit") end ``` Returns the last character of the string. If a limit is supplied, returns a substring from the end of the string until it reaches the limit value (counting backwards). If the given limit is greater than or equal to the string length, returns a copy of self. 
``` str = "hello" str.last # => "o" str.last(1) # => "o" str.last(2) # => "lo" str.last(0) # => "" str.last(6) # => "hello" ``` mb\_chars() Show source ``` # File activesupport/lib/active_support/core_ext/string/multibyte.rb, line 37 def mb_chars ActiveSupport::Multibyte.proxy_class.new(self) end ``` Multibyte proxy --------------- `mb_chars` is a multibyte safe proxy for string methods. It creates and returns an instance of the [`ActiveSupport::Multibyte::Chars`](activesupport/multibyte/chars) class which encapsulates the original string. A Unicode safe version of all the [`String`](string) methods are defined on this proxy class. If the proxy class doesn't respond to a certain method, it's forwarded to the encapsulated string. ``` >> "lj".mb_chars.upcase.to_s => "LJ" ``` NOTE: Ruby 2.4 and later support native Unicode case mappings: ``` >> "lj".upcase => "LJ" ``` [`Method`](method) chaining ---------------------------- All the methods on the Chars proxy which normally return a string will return a Chars object. This allows method chaining on the result of any of these methods. ``` name.mb_chars.reverse.length # => 12 ``` Interoperability and configuration ---------------------------------- The Chars object tries to be as interchangeable with [`String`](string) objects as possible: sorting and comparing between [`String`](string) and Char work like expected. The bang! methods change the internal string representation in the Chars object. Interoperability problems can be resolved easily with a `to_s` call. For more information about the methods defined on the Chars proxy see [`ActiveSupport::Multibyte::Chars`](activesupport/multibyte/chars). For information about how to change the default Multibyte behavior see [`ActiveSupport::Multibyte`](activesupport/multibyte). 
parameterize(separator: "-", preserve\_case: false, locale: nil) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 219 def parameterize(separator: "-", preserve_case: false, locale: nil) ActiveSupport::Inflector.parameterize(self, separator: separator, preserve_case: preserve_case, locale: locale) end ``` Replaces special characters in a string so that it may be used as part of a 'pretty' URL. If the optional parameter `locale` is specified, the word will be parameterized as a word of that language. By default, this parameter is set to `nil` and it will use the configured `I18n.locale`. ``` class Person def to_param "#{id}-#{name.parameterize}" end end @person = Person.find(1) # => #<Person id: 1, name: "Donald E. Knuth"> <%= link_to(@person.name, person_path) %> # => <a href="/person/1-donald-e-knuth">Donald E. Knuth</a> ``` To preserve the case of the characters in a string, use the `preserve_case` argument. ``` class Person def to_param "#{id}-#{name.parameterize(preserve_case: true)}" end end @person = Person.find(1) # => #<Person id: 1, name: "Donald E. Knuth"> <%= link_to(@person.name, person_path) %> # => <a href="/person/1-Donald-E-Knuth">Donald E. Knuth</a> ``` See [`ActiveSupport::Inflector.parameterize`](activesupport/inflector#method-i-parameterize). pluralize(count = nil, locale = :en) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 35 def pluralize(count = nil, locale = :en) locale = count if count.is_a?(Symbol) if count == 1 dup else ActiveSupport::Inflector.pluralize(self, locale) end end ``` Returns the plural form of the word in the string. If the optional parameter `count` is specified, the singular form will be returned if `count == 1`. For any other value of `count` the plural will be returned. If the optional parameter `locale` is specified, the word will be pluralized as a word of that language. By default, this parameter is set to `:en`. 
You must define your own inflection rules for languages other than English. ``` 'post'.pluralize # => "posts" 'octopus'.pluralize # => "octopi" 'sheep'.pluralize # => "sheep" 'words'.pluralize # => "words" 'the blue mailman'.pluralize # => "the blue mailmen" 'CamelOctopus'.pluralize # => "CamelOctopi" 'apple'.pluralize(1) # => "apple" 'apple'.pluralize(2) # => "apples" 'ley'.pluralize(:es) # => "leyes" 'ley'.pluralize(1, :es) # => "ley" ``` See [`ActiveSupport::Inflector.pluralize`](activesupport/inflector#method-i-pluralize). remove(\*patterns) Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 32 def remove(*patterns) dup.remove!(*patterns) end ``` Returns a new string with all occurrences of the patterns removed. ``` str = "foo bar test" str.remove(" test") # => "foo bar" str.remove(" test", /bar/) # => "foo " str # => "foo bar test" ``` remove!(\*patterns) Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 40 def remove!(*patterns) patterns.each do |pattern| gsub! pattern, "" end self end ``` Alters the string by removing all occurrences of the patterns. ``` str = "foo bar test" str.remove!(" test", /bar/) # => "foo " str # => "foo " ``` safe\_constantize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 86 def safe_constantize ActiveSupport::Inflector.safe_constantize(self) end ``` `safe_constantize` tries to find a declared constant with the name specified in the string. It returns `nil` when the name is not in CamelCase or is not initialized. ``` 'Module'.safe_constantize # => Module 'Class'.safe_constantize # => Class 'blargle'.safe_constantize # => nil ``` See [`ActiveSupport::Inflector.safe_constantize`](activesupport/inflector#method-i-safe_constantize). 
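The nil-instead-of-raise behavior of `safe_constantize` boils down to rescuing `NameError` around a constant lookup. A rough plain-Ruby sketch (the real Inflector also walks nested namespaces and guards against partially matching constant names):

```ruby
# Sketch only: resolve a constant by name, returning nil instead of
# raising when the name is invalid or uninitialized.
def safe_constantize_sketch(name)
  Object.const_get(name)
rescue NameError
  nil
end

safe_constantize_sketch("Module")  # => Module
safe_constantize_sketch("blargle") # => nil
```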
singularize(locale = :en) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 60 def singularize(locale = :en) ActiveSupport::Inflector.singularize(self, locale) end ``` The reverse of `pluralize`, returns the singular form of a word in a string. If the optional parameter `locale` is specified, the word will be singularized as a word of that language. By default, this parameter is set to `:en`. You must define your own inflection rules for languages other than English. ``` 'posts'.singularize # => "post" 'octopi'.singularize # => "octopus" 'sheep'.singularize # => "sheep" 'word'.singularize # => "word" 'the blue mailmen'.singularize # => "the blue mailman" 'CamelOctopi'.singularize # => "CamelOctopus" 'leyes'.singularize(:es) # => "ley" ``` See [`ActiveSupport::Inflector.singularize`](activesupport/inflector#method-i-singularize). squish() Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 13 def squish dup.squish! end ``` Returns the string, first removing all whitespace on both ends of the string, and then changing remaining consecutive whitespace groups into one space each. Note that it handles both ASCII and Unicode whitespace. ``` %{ Multi-line string }.squish # => "Multi-line string" " foo bar \n \t boo".squish # => "foo bar boo" ``` squish!() Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 21 def squish! gsub!(/[[:space:]]+/, " ") strip! self end ``` Performs a destructive squish. See [`String#squish`](string#method-i-squish). ``` str = " foo bar \n \t boo" str.squish! # => "foo bar boo" str # => "foo bar boo" ``` strip\_heredoc() Show source ``` # File activesupport/lib/active_support/core_ext/string/strip.rb, line 22 def strip_heredoc gsub(/^#{scan(/^[ \t]*(?=\S)/).min}/, "").tap do |stripped| stripped.freeze if frozen? end end ``` Strips indentation in heredocs. 
For example in ``` if options[:usage] puts <<-USAGE.strip_heredoc This command does such and such. Supported options are: -h This message ... USAGE end ``` the user would see the usage message aligned against the left margin. Technically, it looks for the least indented non-empty line in the whole string, and removes that amount of leading whitespace. tableize() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 231 def tableize ActiveSupport::Inflector.tableize(self) end ``` Creates the name of a table like Rails does for models to table names. This method uses the `pluralize` method on the last word in the string. ``` 'RawScaledScorer'.tableize # => "raw_scaled_scorers" 'ham_and_egg'.tableize # => "ham_and_eggs" 'fancyCategory'.tableize # => "fancy_categories" ``` See [`ActiveSupport::Inflector.tableize`](activesupport/inflector#method-i-tableize). titlecase(keep\_id\_suffix: false) Alias for: [titleize](string#method-i-titleize) titleize(keep\_id\_suffix: false) Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 130 def titleize(keep_id_suffix: false) ActiveSupport::Inflector.titleize(self, keep_id_suffix: keep_id_suffix) end ``` Capitalizes all the words and replaces some characters in the string to create a nicer looking title. `titleize` is meant for creating pretty output. It is not used in the Rails internals. The trailing '\_id','Id'.. can be kept and capitalized by setting the optional parameter `keep_id_suffix` to true. By default, this parameter is false. ``` 'man from the boondocks'.titleize # => "Man From The Boondocks" 'x-men: the last stand'.titleize # => "X Men: The Last Stand" 'string_ending_with_id'.titleize(keep_id_suffix: true) # => "String Ending With Id" ``` `titleize` is also aliased as `titlecase`. See [`ActiveSupport::Inflector.titleize`](activesupport/inflector#method-i-titleize). 
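For plain space-separated phrases, the visible effect of `titleize` can be approximated in one line of plain Ruby. This is a sketch only; the real Inflector also normalizes underscores and dashes, applies human-readable inflection rules, and honors `keep_id_suffix`:

```ruby
# Rough approximation of titleize for simple phrases: upcase the
# first letter of each word. (Not the real Inflector implementation.)
def titleize_sketch(phrase)
  phrase.gsub(/\b[a-z]/) { |char| char.upcase }
end

titleize_sketch("man from the boondocks") # => "Man From The Boondocks"
```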
Also aliased as: [titlecase](string#method-i-titlecase) to(position) Show source ``` # File activesupport/lib/active_support/core_ext/string/access.rb, line 63 def to(position) position += size if position < 0 self[0, position + 1] || +"" end ``` Returns a substring from the beginning of the string to the given position. If the position is negative, it is counted from the end of the string. ``` str = "hello" str.to(0) # => "h" str.to(3) # => "hell" str.to(-2) # => "hell" ``` You can mix it with `from` method and do fun things like: ``` str = "hello" str.from(0).to(-1) # => "hello" str.from(1).to(-2) # => "ell" ``` to\_date() Show source ``` # File activesupport/lib/active_support/core_ext/string/conversions.rb, line 47 def to_date ::Date.parse(self, false) unless blank? end ``` Converts a string to a [`Date`](date) value. ``` "1-1-2012".to_date # => Sun, 01 Jan 2012 "01/01/2012".to_date # => Sun, 01 Jan 2012 "2012-12-13".to_date # => Thu, 13 Dec 2012 "12/13/2012".to_date # => ArgumentError: invalid date ``` to\_datetime() Show source ``` # File activesupport/lib/active_support/core_ext/string/conversions.rb, line 57 def to_datetime ::DateTime.parse(self, false) unless blank? end ``` Converts a string to a [`DateTime`](datetime) value. ``` "1-1-2012".to_datetime # => Sun, 01 Jan 2012 00:00:00 +0000 "01/01/2012 23:59:59".to_datetime # => Sun, 01 Jan 2012 23:59:59 +0000 "2012-12-13 12:50".to_datetime # => Thu, 13 Dec 2012 12:50:00 +0000 "12/13/2012".to_datetime # => ArgumentError: invalid date ``` to\_time(form = :local) Show source ``` # File activesupport/lib/active_support/core_ext/string/conversions.rb, line 22 def to_time(form = :local) parts = Date._parse(self, false) used_keys = %i(year mon mday hour min sec sec_fraction offset) return if (parts.keys & used_keys).empty? 
now = Time.now time = Time.new( parts.fetch(:year, now.year), parts.fetch(:mon, now.month), parts.fetch(:mday, now.day), parts.fetch(:hour, 0), parts.fetch(:min, 0), parts.fetch(:sec, 0) + parts.fetch(:sec_fraction, 0), parts.fetch(:offset, form == :utc ? 0 : nil) ) form == :utc ? time.utc : time.to_time end ``` Converts a string to a [`Time`](time) value. The `form` can be either :utc or :local (default :local). The time is parsed using Time.parse method. If `form` is :local, then the time is in the system timezone. If the date part is missing then the current date is used and if the time part is missing then it is assumed to be 00:00:00. ``` "13-12-2012".to_time # => 2012-12-13 00:00:00 +0100 "06:12".to_time # => 2012-12-13 06:12:00 +0100 "2012-12-13 06:12".to_time # => 2012-12-13 06:12:00 +0100 "2012-12-13T06:12".to_time # => 2012-12-13 06:12:00 +0100 "2012-12-13T06:12".to_time(:utc) # => 2012-12-13 06:12:00 UTC "12/13/2012".to_time # => ArgumentError: argument out of range "1604326192".to_time # => ArgumentError: argument out of range ``` truncate(truncate\_at, options = {}) Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 66 def truncate(truncate_at, options = {}) return dup unless length > truncate_at omission = options[:omission] || "..." length_with_room_for_omission = truncate_at - omission.length stop = \ if options[:separator] rindex(options[:separator], length_with_room_for_omission) || length_with_room_for_omission else length_with_room_for_omission end +"#{self[0, stop]}#{omission}" end ``` Truncates a given `text` after a given `length` if `text` is longer than `length`: ``` 'Once upon a time in a world far far away'.truncate(27) # => "Once upon a time in a wo..." ``` Pass a string or regexp `:separator` to truncate `text` at a natural break: ``` 'Once upon a time in a world far far away'.truncate(27, separator: ' ') # => "Once upon a time in a..." 
'Once upon a time in a world far far away'.truncate(27, separator: /\s/) # => "Once upon a time in a..." ``` The last characters will be replaced with the `:omission` string (defaults to “…”) for a total length not exceeding `length`: ``` 'And they found that many people were sleeping better.'.truncate(25, omission: '... (continued)') # => "And they f... (continued)" ``` truncate\_bytes(truncate\_at, omission: "…") Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 95 def truncate_bytes(truncate_at, omission: "…") omission ||= "" case when bytesize <= truncate_at dup when omission.bytesize > truncate_at raise ArgumentError, "Omission #{omission.inspect} is #{omission.bytesize}, larger than the truncation length of #{truncate_at} bytes" when omission.bytesize == truncate_at omission.dup else self.class.new.tap do |cut| cut_at = truncate_at - omission.bytesize each_grapheme_cluster do |grapheme| if cut.bytesize + grapheme.bytesize <= cut_at cut << grapheme else break end end cut << omission end end end ``` Truncates `text` to at most `bytesize` bytes in length without breaking string encoding by splitting multibyte characters or breaking grapheme clusters (“perceptual characters”) by truncating at combining characters. ``` >> "🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪".size => 20 >> "🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪".bytesize => 80 >> "🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪🔪".truncate_bytes(20) => "🔪🔪🔪🔪…" ``` The truncated text ends with the `:omission` string, defaulting to “…”, for a total length not exceeding `bytesize`. 
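The grapheme-by-grapheme accumulation that `truncate_bytes` performs can be sketched in plain Ruby. This sketch assumes the omission itself fits within the byte limit (the real method raises `ArgumentError` otherwise):

```ruby
# Sketch of byte-limited truncation that never splits a grapheme
# cluster: accumulate whole graphemes while they fit in the budget
# left after reserving room for the omission string.
def truncate_bytes_sketch(string, truncate_at, omission: "…")
  return string.dup if string.bytesize <= truncate_at

  budget = truncate_at - omission.bytesize
  cut = +""
  string.each_grapheme_cluster do |grapheme|
    break if cut.bytesize + grapheme.bytesize > budget
    cut << grapheme
  end
  cut << omission
end

truncate_bytes_sketch("hello world", 8) # => "hello…" (8 bytes: 5 + 3-byte omission)
```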
truncate\_words(words\_count, options = {}) Show source ``` # File activesupport/lib/active_support/core_ext/string/filters.rb, line 136 def truncate_words(words_count, options = {}) sep = options[:separator] || /\s+/ sep = Regexp.escape(sep.to_s) unless Regexp === sep if self =~ /\A((?>.+?#{sep}){#{words_count - 1}}.+?)#{sep}.*/m $1 + (options[:omission] || "...") else dup end end ``` Truncates a given `text` after a given number of words (`words_count`): ``` 'Once upon a time in a world far far away'.truncate_words(4) # => "Once upon a time..." ``` Pass a string or regexp `:separator` to specify a different separator of words: ``` 'Once<br>upon<br>a<br>time<br>in<br>a<br>world'.truncate_words(5, separator: '<br>') # => "Once<br>upon<br>a<br>time<br>in..." ``` The last characters will be replaced with the `:omission` string (defaults to “…”): ``` 'And they found that many people were sleeping better.'.truncate_words(5, omission: '... (continued)') # => "And they found that many... (continued)" ``` underscore() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 143 def underscore ActiveSupport::Inflector.underscore(self) end ``` The reverse of `camelize`. Makes an underscored, lowercase form from the expression in the string. `underscore` will also change '::' to '/' to convert namespaces to paths. ``` 'ActiveModel'.underscore # => "active_model" 'ActiveModel::Errors'.underscore # => "active_model/errors" ``` See [`ActiveSupport::Inflector.underscore`](activesupport/inflector#method-i-underscore). upcase\_first() Show source ``` # File activesupport/lib/active_support/core_ext/string/inflections.rb, line 277 def upcase_first ActiveSupport::Inflector.upcase_first(self) end ``` Converts just the first character to uppercase. 
``` 'what a Lovely Day'.upcase_first # => "What a Lovely Day" 'w'.upcase_first # => "W" ''.upcase_first # => "" ``` See [`ActiveSupport::Inflector.upcase_first`](activesupport/inflector#method-i-upcase_first).
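The `underscore` transformation shown above can be approximated with a few regexp passes. This is a sketch; the real Inflector additionally honors configured acronyms and converts dashes to underscores:

```ruby
# Approximate sketch of underscore: namespaces become path segments,
# then underscores are inserted at lower-to-upper case boundaries.
def underscore_sketch(camel_cased_word)
  camel_cased_word
    .gsub("::", "/")
    .gsub(/([A-Z\d]+)([A-Z][a-z])/, '\1_\2')
    .gsub(/([a-z\d])([A-Z])/, '\1_\2')
    .downcase
end

underscore_sketch("ActiveModel::Errors") # => "active_model/errors"
```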
rails class FalseClass class FalseClass ================= Parent: [Object](object) blank?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 67 def blank? true end ``` `false` is blank: ``` false.blank? # => true ``` @return [true] to\_param() Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 34 def to_param self end ``` Returns `self`. rails module Mime module Mime ============ ALL [`ALL`](mime#ALL) isn't a real MIME type, so we don't register it for lookup with the other concrete types. It's a wildcard match that we use for `respond_to` negotiation internals. EXTENSION\_LOOKUP LOOKUP SET [](type) Show source ``` # File actionpack/lib/action_dispatch/http/mime_type.rb, line 40 def [](type) return type if type.is_a?(Type) Type.lookup_by_extension(type) end ``` fetch(type, &block) Show source ``` # File actionpack/lib/action_dispatch/http/mime_type.rb, line 45 def fetch(type, &block) return type if type.is_a?(Type) EXTENSION_LOOKUP.fetch(type.to_s, &block) end ``` rails module ActionCable module ActionCable =================== gem\_version() Show source ``` # File actioncable/lib/action_cable/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Action Cable as a `Gem::Version`. version() Show source ``` # File actioncable/lib/action_cable/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded Action Cable as a `Gem::Version` rails module Benchmark module Benchmark ================= ms(&block) Show source ``` # File activesupport/lib/active_support/core_ext/benchmark.rb, line 13 def ms(&block) 1000 * realtime(&block) end ``` [`Benchmark`](benchmark) realtime in milliseconds. 
``` Benchmark.realtime { User.all } # => 8.0e-05 Benchmark.ms { User.all } # => 0.074 ``` rails class Pathname class Pathname =============== Parent: [Object](object) existence() Show source ``` # File activesupport/lib/active_support/core_ext/pathname/existence.rb, line 18 def existence self if exist? end ``` Returns the receiver if the named file exists otherwise returns `nil`. `pathname.existence` is equivalent to ``` pathname.exist? ? pathname : nil ``` For example, something like ``` content = pathname.read if pathname.exist? ``` becomes ``` content = pathname.existence&.read ``` @return [Pathname] rails class File class File =========== Parent: [Object](object) atomic\_write(file\_name, temp\_dir = dirname(file\_name)) { |temp\_file| ... } Show source ``` # File activesupport/lib/active_support/core_ext/file/atomic.rb, line 21 def self.atomic_write(file_name, temp_dir = dirname(file_name)) require "tempfile" unless defined?(Tempfile) Tempfile.open(".#{basename(file_name)}", temp_dir) do |temp_file| temp_file.binmode return_val = yield temp_file temp_file.close old_stat = if exist?(file_name) # Get original file permissions stat(file_name) else # If not possible, probe which are the default permissions in the # destination directory. probe_stat_in(dirname(file_name)) end if old_stat # Set correct permissions on new file begin chown(old_stat.uid, old_stat.gid, temp_file.path) # This operation will affect filesystem ACL's chmod(old_stat.mode, temp_file.path) rescue Errno::EPERM, Errno::EACCES # Changing file ownership failed, moving on. end end # Overwrite original file with temp file rename(temp_file.path, file_name) return_val end end ``` Write to a file atomically. Useful for situations where you don't want other processes or threads to see half-written files. ``` File.atomic_write('important.file') do |file| file.write('hello') end ``` This method needs to create a temporary file. By default it will create it in the same directory as the destination file. 
If you don't like this behavior you can provide a different directory but it must be on the same physical filesystem as the file you're trying to write. ``` File.atomic_write('/data/something.important', '/data/tmp') do |file| file.write('hello') end ``` rails module ActiveStorage module ActiveStorage ===================== gem\_version() Show source ``` # File activestorage/lib/active_storage/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Active Storage as a `Gem::Version`. version() Show source ``` # File activestorage/lib/active_storage/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded ActiveStorage as a `Gem::Version` rails module Kernel module Kernel ============== class\_eval(\*args, &block) Show source ``` # File activesupport/lib/active_support/core_ext/kernel/singleton_class.rb, line 5 def class_eval(*args, &block) singleton_class.class_eval(*args, &block) end ``` [`class_eval`](kernel#method-i-class_eval) on an object acts like singleton\_class.class\_eval. concern(topic, &module\_definition) Show source ``` # File activesupport/lib/active_support/core_ext/kernel/concern.rb, line 11 def concern(topic, &module_definition) Object.concern topic, &module_definition end ``` A shortcut to define a toplevel concern, not within a module. See [`Module::Concerning`](module/concerning) for more. enable\_warnings(&block) Show source ``` # File activesupport/lib/active_support/core_ext/kernel/reporting.rb, line 20 def enable_warnings(&block) with_warnings(true, &block) end ``` Sets $VERBOSE to `true` for the duration of the block and back to its original value afterwards. 
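As the `enable_warnings` source shows, these helpers are thin wrappers around swapping `$VERBOSE` for the duration of a block. A minimal stand-alone demonstration of that swap-and-restore pattern:

```ruby
# Demonstration of the $VERBOSE swap performed by enable_warnings:
# set it for a region of code, then always restore the old value.
old = $VERBOSE
seen = nil
begin
  $VERBOSE = true   # warnings are voiced inside this region
  seen = $VERBOSE
ensure
  $VERBOSE = old    # restored even if the region raises
end

seen # => true
```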
silence\_warnings(&block) Show source ``` # File activesupport/lib/active_support/core_ext/kernel/reporting.rb, line 14 def silence_warnings(&block) with_warnings(nil, &block) end ``` Sets $VERBOSE to `nil` for the duration of the block and back to its original value afterwards. ``` silence_warnings do value = noisy_call # no warning voiced end noisy_call # warning voiced ``` suppress(\*exception\_classes) { || ... } Show source ``` # File activesupport/lib/active_support/core_ext/kernel/reporting.rb, line 41 def suppress(*exception_classes) yield rescue *exception_classes end ``` Blocks and ignores any exception passed as argument if raised within the block. ``` suppress(ZeroDivisionError) do 1/0 puts 'This code is NOT reached' end puts 'This code gets executed and nothing related to ZeroDivisionError was seen' ``` with\_warnings(flag) { || ... } Show source ``` # File activesupport/lib/active_support/core_ext/kernel/reporting.rb, line 26 def with_warnings(flag) old_verbose, $VERBOSE = $VERBOSE, flag yield ensure $VERBOSE = old_verbose end ``` Sets $VERBOSE for the duration of the block and back to its original value afterwards. rails class Array class Array ============ Parent: [Object](object) wrap(object) Show source ``` # File activesupport/lib/active_support/core_ext/array/wrap.rb, line 39 def self.wrap(object) if object.nil? [] elsif object.respond_to?(:to_ary) object.to_ary || [object] else [object] end end ``` Wraps its argument in an array unless it is already an array (or array-like). Specifically: * If the argument is `nil` an empty array is returned. * Otherwise, if the argument responds to `to_ary` it is invoked, and its result returned. * Otherwise, returns an array with the argument as its single element. ``` Array.wrap(nil) # => [] Array.wrap([1, 2, 3]) # => [1, 2, 3] Array.wrap(0) # => [0] ``` This method is similar in purpose to `Kernel#Array`, but there are some differences: * If the argument responds to `to_ary` the method is invoked. 
`Kernel#Array` moves on to try `to_a` if the returned value is `nil`, but `Array.wrap` returns an array with the argument as its single element right away. * If the returned value from `to_ary` is neither `nil` nor an `Array` object, `Kernel#Array` raises an exception, while `Array.wrap` does not, it just returns the value. * It does not call `to_a` on the argument, if the argument does not respond to `to_ary` it returns an array with the argument as its single element. The last point is easily explained with some enumerables: ``` Array(foo: :bar) # => [[:foo, :bar]] Array.wrap(foo: :bar) # => [{:foo=>:bar}] ``` There's also a related idiom that uses the splat operator: ``` [*object] ``` which returns `[]` for `nil`, but calls to `Array(object)` otherwise. The differences with `Kernel#Array` explained above apply to the rest of `object`s. deep\_dup() Show source ``` # File activesupport/lib/active_support/core_ext/object/deep_dup.rb, line 29 def deep_dup map(&:deep_dup) end ``` Returns a deep copy of array. ``` array = [1, [2, 3]] dup = array.deep_dup dup[1][2] = 4 array[1][2] # => nil dup[1][2] # => 4 ``` excluding(\*elements) Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 47 def excluding(*elements) self - elements.flatten(1) end ``` Returns a copy of the [`Array`](array) excluding the specified elements. ``` ["David", "Rafael", "Aaron", "Todd"].excluding("Aaron", "Todd") # => ["David", "Rafael"] [ [ 0, 1 ], [ 1, 0 ] ].excluding([ [ 1, 0 ] ]) # => [ [ 0, 1 ] ] ``` Note: This is an optimization of `Enumerable#excluding` that uses `Array#-` instead of `Array#reject` for performance reasons. Also aliased as: [without](array#method-i-without) extract!() { |element| ... } Show source ``` # File activesupport/lib/active_support/core_ext/array/extract.rb, line 10 def extract! return to_enum(:extract!) { size } unless block_given? extracted_elements = [] reject! 
do |element| extracted_elements << element if yield(element) end extracted_elements end ``` Removes and returns the elements for which the block returns a true value. If no block is given, an Enumerator is returned instead. ``` numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] odd_numbers = numbers.extract! { |number| number.odd? } # => [1, 3, 5, 7, 9] numbers # => [0, 2, 4, 6, 8] ``` extract\_options!() Show source ``` # File activesupport/lib/active_support/core_ext/array/extract_options.rb, line 24 def extract_options! if last.is_a?(Hash) && last.extractable_options? pop else {} end end ``` Extracts options from a set of arguments. Removes and returns the last element in the array if it's a hash, otherwise returns a blank hash. ``` def options(*args) args.extract_options! end options(1, 2) # => {} options(1, 2, a: :b) # => {:a=>:b} ``` fifth() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 76 def fifth self[4] end ``` Equal to `self[4]`. ``` %w( a b c d e ).fifth # => "e" ``` forty\_two() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 83 def forty_two self[41] end ``` Equal to `self[41]`. Also known as accessing “the reddit”. ``` (1..42).to_a.forty_two # => 42 ``` fourth() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 69 def fourth self[3] end ``` Equal to `self[3]`. ``` %w( a b c d e ).fourth # => "d" ``` from(position) Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 12 def from(position) self[position, length] || [] end ``` Returns the tail of the array from `position`. 
``` %w( a b c d ).from(0) # => ["a", "b", "c", "d"] %w( a b c d ).from(2) # => ["c", "d"] %w( a b c d ).from(10) # => [] %w().from(0) # => [] %w( a b c d ).from(-2) # => ["c", "d"] %w( a b c ).from(-10) # => [] ``` in\_groups(number, fill\_with = nil, &block) Show source ``` # File activesupport/lib/active_support/core_ext/array/grouping.rb, line 62 def in_groups(number, fill_with = nil, &block) # size.div number gives minor group size; # size % number gives how many objects need extra accommodation; # each group hold either division or division + 1 items. division = size.div number modulo = size % number # create a new array avoiding dup groups = [] start = 0 number.times do |index| length = division + (modulo > 0 && modulo > index ? 1 : 0) groups << last_group = slice(start, length) last_group << fill_with if fill_with != false && modulo > 0 && length == division start += length end if block_given? groups.each(&block) else groups end end ``` Splits or iterates over the array in `number` of groups, padding any remaining slots with `fill_with` unless it is `false`. ``` %w(1 2 3 4 5 6 7 8 9 10).in_groups(3) {|group| p group} ["1", "2", "3", "4"] ["5", "6", "7", nil] ["8", "9", "10", nil] %w(1 2 3 4 5 6 7 8 9 10).in_groups(3, '&nbsp;') {|group| p group} ["1", "2", "3", "4"] ["5", "6", "7", "&nbsp;"] ["8", "9", "10", "&nbsp;"] %w(1 2 3 4 5 6 7).in_groups(3, false) {|group| p group} ["1", "2", "3"] ["4", "5"] ["6", "7"] ``` in\_groups\_of(number, fill\_with = nil, &block) Show source ``` # File activesupport/lib/active_support/core_ext/array/grouping.rb, line 22 def in_groups_of(number, fill_with = nil, &block) if number.to_i <= 0 raise ArgumentError, "Group size must be a positive integer, was #{number.inspect}" end if fill_with == false collection = self else # size % number gives how many extra we have; # subtracting from number gives how many to add; # modulo number ensures we don't add group of just fill. 
padding = (number - size % number) % number collection = dup.concat(Array.new(padding, fill_with)) end if block_given? collection.each_slice(number, &block) else collection.each_slice(number).to_a end end ``` Splits or iterates over the array in groups of size `number`, padding any remaining slots with `fill_with` unless it is `false`. ``` %w(1 2 3 4 5 6 7 8 9 10).in_groups_of(3) {|group| p group} ["1", "2", "3"] ["4", "5", "6"] ["7", "8", "9"] ["10", nil, nil] %w(1 2 3 4 5).in_groups_of(2, '&nbsp;') {|group| p group} ["1", "2"] ["3", "4"] ["5", "&nbsp;"] %w(1 2 3 4 5).in_groups_of(2, false) {|group| p group} ["1", "2"] ["3", "4"] ["5"] ``` including(\*elements) Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 36 def including(*elements) self + elements.flatten(1) end ``` Returns a new array that includes the passed elements. ``` [ 1, 2, 3 ].including(4, 5) # => [ 1, 2, 3, 4, 5 ] [ [ 0, 1 ] ].including([ [ 1, 0 ] ]) # => [ [ 0, 1 ], [ 1, 0 ] ] ``` inquiry() Show source ``` # File activesupport/lib/active_support/core_ext/array/inquiry.rb, line 16 def inquiry ActiveSupport::ArrayInquirer.new(self) end ``` Wraps the array in an `ArrayInquirer` object, which gives a friendlier way to check its string-like contents. ``` pets = [:cat, :dog].inquiry pets.cat? # => true pets.ferret? # => false pets.any?(:cat, :ferret) # => true pets.any?(:ferret, :alligator) # => false ``` second() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 55 def second self[1] end ``` Equal to `self[1]`. ``` %w( a b c d e ).second # => "b" ``` second\_to\_last() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 97 def second_to_last self[-2] end ``` Equal to `self[-2]`. 
``` %w( a b c d e ).second_to_last # => "d" ``` split(value = nil, &block) Show source ``` # File activesupport/lib/active_support/core_ext/array/grouping.rb, line 93 def split(value = nil, &block) arr = dup result = [] if block_given? while (idx = arr.index(&block)) result << arr.shift(idx) arr.shift end else while (idx = arr.index(value)) result << arr.shift(idx) arr.shift end end result << arr end ``` Divides the array into one or more subarrays based on a delimiting `value` or the result of an optional block. ``` [1, 2, 3, 4, 5].split(3) # => [[1, 2], [4, 5]] (1..10).to_a.split { |i| i % 3 == 0 } # => [[1, 2], [4, 5], [7, 8], [10]] ``` third() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 62 def third self[2] end ``` Equal to `self[2]`. ``` %w( a b c d e ).third # => "c" ``` third\_to\_last() Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 90 def third_to_last self[-3] end ``` Equal to `self[-3]`. ``` %w( a b c d e ).third_to_last # => "c" ``` to(position) Show source ``` # File activesupport/lib/active_support/core_ext/array/access.rb, line 24 def to(position) if position >= 0 take position + 1 else self[0..position] end end ``` Returns the beginning of the array up to `position`. ``` %w( a b c d ).to(0) # => ["a"] %w( a b c d ).to(2) # => ["a", "b", "c"] %w( a b c d ).to(10) # => ["a", "b", "c", "d"] %w().to(0) # => [] %w( a b c d ).to(-2) # => ["a", "b", "c"] %w( a b c ).to(-10) # => [] ``` to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/core_ext/array/conversions.rb, line 95 def to_formatted_s(format = :default) case format when :db if empty? "null" else collect(&:id).join(",") end else to_default_s end end ``` Extends `Array#to_s` to convert a collection of elements into a comma separated id list if `:db` argument is given as the format. This method is aliased to `to_fs`. 
``` Blog.all.to_formatted_s(:db) # => "1,2,3" Blog.none.to_formatted_s(:db) # => "null" [1,2].to_formatted_s # => "[1, 2]" ``` Also aliased as: [to\_fs](array#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](array#method-i-to_formatted_s) to\_param() Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 42 def to_param collect(&:to_param).join "/" end ``` Calls `to_param` on all its elements and joins the result with slashes. This is used by `url_for` in Action Pack. to\_query(key) Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 50 def to_query(key) prefix = "#{key}[]" if empty? nil.to_query(prefix) else collect { |value| value.to_query(prefix) }.join "&" end end ``` Converts an array into a string suitable for use as a URL query string, using the given `key` as the param name. ``` ['Rails', 'coding'].to_query('hobbies') # => "hobbies%5B%5D=Rails&hobbies%5B%5D=coding" ``` to\_sentence(options = {}) Show source ``` # File activesupport/lib/active_support/core_ext/array/conversions.rb, line 61 def to_sentence(options = {}) options.assert_valid_keys(:words_connector, :two_words_connector, :last_word_connector, :locale) default_connectors = { words_connector: ", ", two_words_connector: " and ", last_word_connector: ", and " } if options[:locale] != false && defined?(I18n) i18n_connectors = I18n.translate(:'support.array', locale: options[:locale], default: {}) default_connectors.merge!(i18n_connectors) end options = default_connectors.merge!(options) case length when 0 +"" when 1 +"#{self[0]}" when 2 +"#{self[0]}#{options[:two_words_connector]}#{self[1]}" else +"#{self[0...-1].join(options[:words_connector])}#{options[:last_word_connector]}#{self[-1]}" end end ``` Converts the array to a comma-separated sentence where the last element is joined by the connector word. You can pass the following options to change the default behavior. 
If you pass an option key that doesn't exist in the list below, it will raise an `ArgumentError`. #### Options * `:words_connector` - The sign or word used to join all but the last element in arrays with three or more elements (default: “, ”). * `:last_word_connector` - The sign or word used to join the last element in arrays with three or more elements (default: “, and ”). * `:two_words_connector` - The sign or word used to join the elements in arrays with two elements (default: “ and ”). * `:locale` - If `i18n` is available, you can set a locale and use the connector options defined on the 'support.array' namespace in the corresponding dictionary file. #### Examples ``` [].to_sentence # => "" ['one'].to_sentence # => "one" ['one', 'two'].to_sentence # => "one and two" ['one', 'two', 'three'].to_sentence # => "one, two, and three" ['one', 'two'].to_sentence(passing: 'invalid option') # => ArgumentError: Unknown key: :passing. Valid keys are: :words_connector, :two_words_connector, :last_word_connector, :locale ['one', 'two'].to_sentence(two_words_connector: '-') # => "one-two" ['one', 'two', 'three'].to_sentence(words_connector: ' or ', last_word_connector: ' or at least ') # => "one or two or at least three" ``` Using `:locale` option: ``` # Given this locale dictionary: # # es: # support: # array: # words_connector: " o " # two_words_connector: " y " # last_word_connector: " o al menos " ['uno', 'dos'].to_sentence(locale: :es) # => "uno y dos" ['uno', 'dos', 'tres'].to_sentence(locale: :es) # => "uno o dos o al menos tres" ``` to\_xml(options = {}) { |builder| ... 
} Show source ``` # File activesupport/lib/active_support/core_ext/array/conversions.rb, line 185 def to_xml(options = {}) require "active_support/builder" unless defined?(Builder::XmlMarkup) options = options.dup options[:indent] ||= 2 options[:builder] ||= Builder::XmlMarkup.new(indent: options[:indent]) options[:root] ||= \ if first.class != Hash && all?(first.class) underscored = ActiveSupport::Inflector.underscore(first.class.name) ActiveSupport::Inflector.pluralize(underscored).tr("/", "_") else "objects" end builder = options[:builder] builder.instruct! unless options.delete(:skip_instruct) root = ActiveSupport::XmlMini.rename_key(options[:root].to_s, options) children = options.delete(:children) || root.singularize attributes = options[:skip_types] ? {} : { type: "array" } if empty? builder.tag!(root, attributes) else builder.tag!(root, attributes) do each { |value| ActiveSupport::XmlMini.to_tag(children, value, options) } yield builder if block_given? end end end ``` Returns a string that represents the array in XML by invoking `to_xml` on each element. Active Record collections delegate their representation in XML to this method. All elements are expected to respond to `to_xml`, if any of them does not then an exception is raised. The root node reflects the class name of the first element in plural if all elements belong to the same type and that's not Hash: ``` customer.projects.to_xml <?xml version="1.0" encoding="UTF-8"?> <projects type="array"> <project> <amount type="decimal">20000.0</amount> <customer-id type="integer">1567</customer-id> <deal-date type="date">2008-04-09</deal-date> ... </project> <project> <amount type="decimal">57230.0</amount> <customer-id type="integer">1567</customer-id> <deal-date type="date">2008-04-15</deal-date> ... 
</project> </projects> ``` Otherwise the root element is “objects”: ``` [{ foo: 1, bar: 2}, { baz: 3}].to_xml <?xml version="1.0" encoding="UTF-8"?> <objects type="array"> <object> <bar type="integer">2</bar> <foo type="integer">1</foo> </object> <object> <baz type="integer">3</baz> </object> </objects> ``` If the collection is empty the root element is “nil-classes” by default: ``` [].to_xml <?xml version="1.0" encoding="UTF-8"?> <nil-classes type="array"/> ``` To ensure a meaningful root element use the `:root` option: ``` customer_with_no_projects.projects.to_xml(root: 'projects') <?xml version="1.0" encoding="UTF-8"?> <projects type="array"/> ``` By default name of the node for the children of root is `root.singularize`. You can change it with the `:children` option. The `options` hash is passed downwards: ``` Message.all.to_xml(skip_types: true) <?xml version="1.0" encoding="UTF-8"?> <messages> <message> <created-at>2008-03-07T09:58:18+01:00</created-at> <id>1</id> <name>1</name> <updated-at>2008-03-07T09:58:18+01:00</updated-at> <user-id>1</user-id> </message> </messages> ``` without(\*elements) Alias for: [excluding](array#method-i-excluding)
rails class Regexp class Regexp ============= Parent: [Object](object) multiline?() Show source ``` # File activesupport/lib/active_support/core_ext/regexp.rb, line 11 def multiline? options & MULTILINE == MULTILINE end ``` Returns `true` if the regexp has the multiline flag set. ``` (/./).multiline? # => false (/./m).multiline? # => true Regexp.new(".").multiline? # => false Regexp.new(".", Regexp::MULTILINE).multiline? # => true ``` rails class LoadError class LoadError ================ Parent: [Object](object) is\_missing?(location) Show source ``` # File activesupport/lib/active_support/core_ext/load_error.rb, line 6 def is_missing?(location) location.delete_suffix(".rb") == path.to_s.delete_suffix(".rb") end ``` Returns true if the given path name (except perhaps for the “.rb” extension) is the missing file which caused the exception to be raised. rails module ActiveJob module ActiveJob ================= gem\_version() Show source ``` # File activejob/lib/active_job/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Active Job as a `Gem::Version` version() Show source ``` # File activejob/lib/active_job/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded Active Job as a `Gem::Version` rails module ActionMailbox module ActionMailbox ===================== gem\_version() Show source ``` # File actionmailbox/lib/action_mailbox/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the currently-loaded version of Action Mailbox as a `Gem::Version`. version() Show source ``` # File actionmailbox/lib/action_mailbox/version.rb, line 7 def self.version gem_version end ``` Returns the currently-loaded version of Action Mailbox as a `Gem::Version`. 
rails class Range class Range ============ Parent: [Object](object) overlaps?(other) Show source ``` # File activesupport/lib/active_support/core_ext/range/overlaps.rb, line 7 def overlaps?(other) cover?(other.first) || other.cover?(first) end ``` Compare two ranges and see if they overlap each other ``` (1..5).overlaps?(4..6) # => true (1..5).overlaps?(7..9) # => false ``` rails module Arel module Arel ============ VERSION sql(raw\_sql) Show source ``` # File activerecord/lib/arel.rb, line 38 def self.sql(raw_sql) Arel::Nodes::SqlLiteral.new raw_sql end ``` Wrap a known-safe SQL string for passing to query methods, e.g. ``` Post.order(Arel.sql("REPLACE(title, 'misc', 'zzzz') asc")).pluck(:id) ``` Great caution should be taken to avoid SQL injection vulnerabilities. This method should not be used with unsafe values such as request parameters or model attributes. rails module ActiveRecord module ActiveRecord ==================== MigrationProxy [`MigrationProxy`](activerecord#MigrationProxy) is used to defer loading of the actual migration classes until they are needed Point UnknownAttributeError Raised when unknown attributes are supplied via mass assignment. ``` class Person include ActiveModel::AttributeAssignment include ActiveModel::Validations end person = Person.new person.assign_attributes(name: 'Gorby') # => ActiveModel::UnknownAttributeError: unknown attribute 'name' for Person. 
``` gem\_version() Show source ``` # File activerecord/lib/active_record/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Active Record as a `Gem::Version` version() Show source ``` # File activerecord/lib/active_record/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded [`ActiveRecord`](activerecord) as a `Gem::Version` rails class NilClass class NilClass =============== Parent: [Object](object) blank?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 56 def blank? true end ``` `nil` is blank: ``` nil.blank? # => true ``` @return [true] to\_param() Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 20 def to_param self end ``` Returns `self`. try(\*) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 148 def try(*) nil end ``` Calling `try` on `nil` always returns `nil`. It becomes especially helpful when navigating through associations that may return `nil`. ``` nil.try(:name) # => nil ``` Without `try` ``` @person && @person.children.any? && @person.children.first.name ``` With `try` ``` @person.try(:children).try(:first).try(:name) ``` try!(\*) Show source ``` # File activesupport/lib/active_support/core_ext/object/try.rb, line 155 def try!(*) nil end ``` Calling `try!` on `nil` always returns `nil`. 
``` nil.try!(:name) # => nil ``` rails module ActionView module ActionView ================== TemplateError gem\_version() Show source ``` # File actionview/lib/action_view/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Action View as a `Gem::Version` version() Show source ``` # File actionview/lib/action_view/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded [`ActionView`](actionview) as a `Gem::Version` rails class TrueClass class TrueClass ================ Parent: [Object](object) blank?() Show source ``` # File activesupport/lib/active_support/core_ext/object/blank.rb, line 78 def blank? false end ``` `true` is not blank: ``` true.blank? # => false ``` @return [false] to\_param() Show source ``` # File activesupport/lib/active_support/core_ext/object/to_query.rb, line 27 def to_param self end ``` Returns `self`. rails class DateTime class DateTime =============== Parent: [Object](object) Included modules: DateAndTime::Compatibility civil\_from\_format(utc\_or\_local, year, month = 1, day = 1, hour = 0, min = 0, sec = 0) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 69 def self.civil_from_format(utc_or_local, year, month = 1, day = 1, hour = 0, min = 0, sec = 0) if utc_or_local.to_sym == :local offset = ::Time.local(year, month, day).utc_offset.to_r / 86400 else offset = 0 end civil(year, month, day, hour, min, sec, offset) end ``` Returns [`DateTime`](datetime) with local offset for given year if format is local else offset is zero. ``` DateTime.civil_from_format :local, 2012 # => Sun, 01 Jan 2012 00:00:00 +0300 DateTime.civil_from_format :local, 2012, 12, 17 # => Mon, 17 Dec 2012 00:00:00 +0000 ``` current() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 10 def current ::Time.zone ? 
::Time.zone.now.to_datetime : ::Time.now.to_datetime end ``` Returns `Time.zone.now.to_datetime` when `Time.zone` or `config.time_zone` are set, otherwise returns `Time.now.to_datetime`. <=>(other) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 204 def <=>(other) if other.respond_to? :to_datetime super other.to_datetime rescue nil else super end end ``` Layers additional behavior on DateTime#<=> so that [`Time`](time) and [`ActiveSupport::TimeWithZone`](activesupport/timewithzone) instances can be compared with a [`DateTime`](datetime). Calls superclass method acts\_like\_date?() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/acts_like.rb, line 8 def acts_like_date? true end ``` Duck-types as a Date-like class. See [`Object#acts_like?`](object#method-i-acts_like-3F). acts\_like\_time?() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/acts_like.rb, line 13 def acts_like_time? true end ``` Duck-types as a Time-like class. See [`Object#acts_like?`](object#method-i-acts_like-3F). advance(options) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 78 def advance(options) unless options[:weeks].nil? options[:weeks], partial_weeks = options[:weeks].divmod(1) options[:days] = options.fetch(:days, 0) + 7 * partial_weeks end unless options[:days].nil? options[:days], partial_days = options[:days].divmod(1) options[:hours] = options.fetch(:hours, 0) + 24 * partial_days end d = to_date.advance(options) datetime_advanced_by_date = change(year: d.year, month: d.month, day: d.day) seconds_to_advance = \ options.fetch(:seconds, 0) + options.fetch(:minutes, 0) * 60 + options.fetch(:hours, 0) * 3600 if seconds_to_advance.zero? datetime_advanced_by_date else datetime_advanced_by_date.since(seconds_to_advance) end end ``` Uses [`Date`](date) to provide precise [`Time`](time) calculations for years, months, and days. 
The `options` parameter takes a hash with any of these keys: `:years`, `:months`, `:weeks`, `:days`, `:hours`, `:minutes`, `:seconds`. ago(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 105 def ago(seconds) since(-seconds) end ``` Returns a new [`DateTime`](datetime) representing the time a number of seconds ago. Do not use this method in combination with x.months, use months\_ago instead! at\_beginning\_of\_day() Alias for: [beginning\_of\_day](datetime#method-i-beginning_of_day) at\_beginning\_of\_hour() Alias for: [beginning\_of\_hour](datetime#method-i-beginning_of_hour) at\_beginning\_of\_minute() Alias for: [beginning\_of\_minute](datetime#method-i-beginning_of_minute) at\_end\_of\_day() Alias for: [end\_of\_day](datetime#method-i-end_of_day) at\_end\_of\_hour() Alias for: [end\_of\_hour](datetime#method-i-end_of_hour) at\_end\_of\_minute() Alias for: [end\_of\_minute](datetime#method-i-end_of_minute) at\_midday() Alias for: [middle\_of\_day](datetime#method-i-middle_of_day) at\_middle\_of\_day() Alias for: [middle\_of\_day](datetime#method-i-middle_of_day) at\_midnight() Alias for: [beginning\_of\_day](datetime#method-i-beginning_of_day) at\_noon() Alias for: [middle\_of\_day](datetime#method-i-middle_of_day) beginning\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 118 def beginning_of_day change(hour: 0) end ``` Returns a new [`DateTime`](datetime) representing the start of the day (0:00). Also aliased as: [midnight](datetime#method-i-midnight), [at\_midnight](datetime#method-i-at_midnight), [at\_beginning\_of\_day](datetime#method-i-at_beginning_of_day) beginning\_of\_hour() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 142 def beginning_of_hour change(min: 0) end ``` Returns a new [`DateTime`](datetime) representing the start of the hour (hh:00:00). 
Also aliased as: [at\_beginning\_of\_hour](datetime#method-i-at_beginning_of_hour) beginning\_of\_minute() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 154 def beginning_of_minute change(sec: 0) end ``` Returns a new [`DateTime`](datetime) representing the start of the minute (hh:mm:00). Also aliased as: [at\_beginning\_of\_minute](datetime#method-i-at_beginning_of_minute) change(options) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 51 def change(options) if new_nsec = options[:nsec] raise ArgumentError, "Can't change both :nsec and :usec at the same time: #{options.inspect}" if options[:usec] new_fraction = Rational(new_nsec, 1000000000) else new_usec = options.fetch(:usec, (options[:hour] || options[:min] || options[:sec]) ? 0 : Rational(nsec, 1000)) new_fraction = Rational(new_usec, 1000000) end raise ArgumentError, "argument out of range" if new_fraction >= 1 ::DateTime.civil( options.fetch(:year, year), options.fetch(:month, month), options.fetch(:day, day), options.fetch(:hour, hour), options.fetch(:min, options[:hour] ? 0 : min), options.fetch(:sec, (options[:hour] || options[:min]) ? 0 : sec) + new_fraction, options.fetch(:offset, offset), options.fetch(:start, start) ) end ``` Returns a new [`DateTime`](datetime) where one or more of the elements have been changed according to the `options` parameter. The time options (`:hour`, `:min`, `:sec`) reset cascadingly, so if only the hour is passed, then minute and sec is set to 0. If the hour and minute is passed, then sec is set to 0. The `options` parameter takes a hash with any of these keys: `:year`, `:month`, `:day`, `:hour`, `:min`, `:sec`, `:offset`, `:start`. 
``` DateTime.new(2012, 8, 29, 22, 35, 0).change(day: 1) # => DateTime.new(2012, 8, 1, 22, 35, 0) DateTime.new(2012, 8, 29, 22, 35, 0).change(year: 1981, day: 1) # => DateTime.new(1981, 8, 1, 22, 35, 0) DateTime.new(2012, 8, 29, 22, 35, 0).change(year: 1981, hour: 0) # => DateTime.new(1981, 8, 29, 0, 0, 0) ``` default\_inspect() Alias for: [inspect](datetime#method-i-inspect) end\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 136 def end_of_day change(hour: 23, min: 59, sec: 59, usec: Rational(999999999, 1000)) end ``` Returns a new [`DateTime`](datetime) representing the end of the day (23:59:59). Also aliased as: [at\_end\_of\_day](datetime#method-i-at_end_of_day) end\_of\_hour() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 148 def end_of_hour change(min: 59, sec: 59, usec: Rational(999999999, 1000)) end ``` Returns a new [`DateTime`](datetime) representing the end of the hour (hh:59:59). Also aliased as: [at\_end\_of\_hour](datetime#method-i-at_end_of_hour) end\_of\_minute() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 160 def end_of_minute change(sec: 59, usec: Rational(999999999, 1000)) end ``` Returns a new [`DateTime`](datetime) representing the end of the minute (hh:mm:59). Also aliased as: [at\_end\_of\_minute](datetime#method-i-at_end_of_minute) formatted\_offset(colon = true, alternate\_utc\_string = nil) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 51 def formatted_offset(colon = true, alternate_utc_string = nil) utc? && alternate_utc_string || ActiveSupport::TimeZone.seconds_to_utc_offset(utc_offset, colon) end ``` Returns a formatted string of the offset from UTC, or an alternative string if the time zone is already UTC. 
``` datetime = DateTime.civil(2000, 1, 1, 0, 0, 0, Rational(-6, 24)) datetime.formatted_offset # => "-06:00" datetime.formatted_offset(false) # => "-0600" ``` getgm() Alias for: [utc](datetime#method-i-utc) getlocal(utc\_offset = nil) Alias for: [localtime](datetime#method-i-localtime) getutc() Alias for: [utc](datetime#method-i-utc) gmtime() Alias for: [utc](datetime#method-i-utc) in(seconds) Alias for: [since](datetime#method-i-since) inspect() Also aliased as: [default\_inspect](datetime#method-i-default_inspect) Alias for: [readable\_inspect](datetime#method-i-readable_inspect) localtime(utc\_offset = nil) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 166 def localtime(utc_offset = nil) utc = new_offset(0) Time.utc( utc.year, utc.month, utc.day, utc.hour, utc.min, utc.sec + utc.sec_fraction ).getlocal(utc_offset) end ``` Returns a `Time` instance of the simultaneous time in the system timezone. Also aliased as: [getlocal](datetime#method-i-getlocal) midday() Alias for: [middle\_of\_day](datetime#method-i-middle_of_day) middle\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 126 def middle_of_day change(hour: 12) end ``` Returns a new [`DateTime`](datetime) representing the middle of the day (12:00) Also aliased as: [midday](datetime#method-i-midday), [noon](datetime#method-i-noon), [at\_midday](datetime#method-i-at_midday), [at\_noon](datetime#method-i-at_noon), [at\_middle\_of\_day](datetime#method-i-at_middle_of_day) midnight() Alias for: [beginning\_of\_day](datetime#method-i-beginning_of_day) noon() Alias for: [middle\_of\_day](datetime#method-i-middle_of_day) nsec() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 94 def nsec (sec_fraction * 1_000_000_000).to_i end ``` Returns the fraction of a second as nanoseconds readable\_inspect() Show source ``` # File 
activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 56 def readable_inspect to_formatted_s(:rfc822) end ``` Overrides the default inspect method with a human readable one, e.g., “Mon, 21 Feb 2005 14:30:00 +0000”. Also aliased as: [inspect](datetime#method-i-inspect) seconds\_since\_midnight() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 20 def seconds_since_midnight sec + (min * 60) + (hour * 3600) end ``` Returns the number of seconds since 00:00:00. ``` DateTime.new(2012, 8, 29, 0, 0, 0).seconds_since_midnight # => 0 DateTime.new(2012, 8, 29, 12, 34, 56).seconds_since_midnight # => 45296 DateTime.new(2012, 8, 29, 23, 59, 59).seconds_since_midnight # => 86399 ``` seconds\_until\_end\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 29 def seconds_until_end_of_day end_of_day.to_i - to_i end ``` Returns the number of seconds until 23:59:59. ``` DateTime.new(2012, 8, 29, 0, 0, 0).seconds_until_end_of_day # => 86399 DateTime.new(2012, 8, 29, 12, 34, 56).seconds_until_end_of_day # => 41103 DateTime.new(2012, 8, 29, 23, 59, 59).seconds_until_end_of_day # => 0 ``` since(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 112 def since(seconds) self + Rational(seconds, 86400) end ``` Returns a new [`DateTime`](datetime) representing the time a number of seconds since the instance time. Do not use this method in combination with x.months, use months\_since instead! 
Also aliased as: [in](datetime#method-i-in) subsec() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 36 def subsec sec_fraction end ``` Returns the fraction of a second as a `Rational` ``` DateTime.new(2012, 8, 29, 0, 0, 0.5).subsec # => (1/2) ``` to\_f() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 79 def to_f seconds_since_unix_epoch.to_f + sec_fraction end ``` Converts `self` to a floating-point number of seconds, including fractional microseconds, since the Unix epoch. to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 35 def to_formatted_s(format = :default) if formatter = ::Time::DATE_FORMATS[format] formatter.respond_to?(:call) ? formatter.call(self).to_s : strftime(formatter) else to_default_s end end ``` Convert to a formatted string. See Time::DATE\_FORMATS for predefined formats. This method is aliased to `to_fs`. ### Examples ``` datetime = DateTime.civil(2007, 12, 4, 0, 0, 0, 0) # => Tue, 04 Dec 2007 00:00:00 +0000 datetime.to_formatted_s(:db) # => "2007-12-04 00:00:00" datetime.to_fs(:db) # => "2007-12-04 00:00:00" datetime.to_formatted_s(:number) # => "20071204000000" datetime.to_formatted_s(:short) # => "04 Dec 00:00" datetime.to_formatted_s(:long) # => "December 04, 2007 00:00" datetime.to_formatted_s(:long_ordinal) # => "December 4th, 2007 00:00" datetime.to_formatted_s(:rfc822) # => "Tue, 04 Dec 2007 00:00:00 +0000" datetime.to_formatted_s(:iso8601) # => "2007-12-04T00:00:00+00:00" ``` Adding your own datetime formats to [`to_formatted_s`](datetime#method-i-to_formatted_s) ---------------------------------------------------------------------------------------- [`DateTime`](datetime) formats are shared with [`Time`](time). You can add your own to the Time::DATE\_FORMATS hash. 
Use the format name as the hash key and either a strftime string or Proc instance that takes a time or datetime argument as the value.

```
# config/initializers/time_formats.rb
Time::DATE_FORMATS[:month_and_year] = '%B %Y'
Time::DATE_FORMATS[:short_ordinal] = lambda { |time| time.strftime("%B #{time.day.ordinalize}") }
```

Also aliased as: [to\_fs](datetime#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](datetime#method-i-to_formatted_s) to\_i() Show source

```
# File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 84
def to_i
  seconds_since_unix_epoch.to_i
end
```

Converts `self` to an integer number of seconds since the Unix epoch. to\_time() Show source

```
# File activesupport/lib/active_support/core_ext/date_time/compatibility.rb, line 15
def to_time
  preserve_timezone ? getlocal(utc_offset) : getlocal
end
```

Returns either an instance of `Time` with the same UTC offset as `self`, or an instance of `Time` representing the same time in the local system timezone, depending on the setting of `ActiveSupport.to_time_preserves_timezone`. usec() Show source

```
# File activesupport/lib/active_support/core_ext/date_time/conversions.rb, line 89
def usec
  (sec_fraction * 1_000_000).to_i
end
```

Returns the fraction of a second as microseconds. utc() Show source

```
# File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 180
def utc
  utc = new_offset(0)

  Time.utc(
    utc.year, utc.month, utc.day,
    utc.hour, utc.min, utc.sec + utc.sec_fraction
  )
end
```

Returns a `Time` instance of the simultaneous time in the UTC timezone.
``` DateTime.civil(2005, 2, 21, 10, 11, 12, Rational(-6, 24)) # => Mon, 21 Feb 2005 10:11:12 -0600 DateTime.civil(2005, 2, 21, 10, 11, 12, Rational(-6, 24)).utc # => Mon, 21 Feb 2005 16:11:12 UTC ``` Also aliased as: [getgm](datetime#method-i-getgm), [getutc](datetime#method-i-getutc), [gmtime](datetime#method-i-gmtime) utc?() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 193 def utc? offset == 0 end ``` Returns `true` if `offset == 0`. utc\_offset() Show source ``` # File activesupport/lib/active_support/core_ext/date_time/calculations.rb, line 198 def utc_offset (offset * 86400).to_i end ``` Returns the offset value in seconds.
rails class Date class Date =========== Parent: [Object](object) Included modules: [DateAndTime::Calculations](dateandtime/calculations), [DateAndTime::Zones](dateandtime/zones) DATE\_FORMATS beginning\_of\_week\_default[RW] beginning\_of\_week() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 19 def beginning_of_week ::ActiveSupport::IsolatedExecutionState[:beginning_of_week] || beginning_of_week_default || :monday end ``` Returns the week start (e.g. :monday) for the current request, if this has been set (via [`Date.beginning_of_week=`](date#method-c-beginning_of_week-3D)). If `Date.beginning_of_week` has not been set for the current request, returns the week start specified in `config.beginning_of_week`. If no config.beginning\_of\_week was specified, returns :monday. beginning\_of\_week=(week\_start) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 27 def beginning_of_week=(week_start) ::ActiveSupport::IsolatedExecutionState[:beginning_of_week] = find_beginning_of_week!(week_start) end ``` Sets `Date.beginning_of_week` to a week start (e.g. :monday) for current request/thread. This method accepts any of the following day symbols: :monday, :tuesday, :wednesday, :thursday, :friday, :saturday, :sunday current() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 48 def current ::Time.zone ? ::Time.zone.today : ::Date.today end ``` Returns [`Time.zone`](time#method-c-zone).today when `Time.zone` or `config.time_zone` are set, otherwise just returns Date.today. find\_beginning\_of\_week!(week\_start) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 32 def find_beginning_of_week!(week_start) raise ArgumentError, "Invalid beginning of week: #{week_start}" unless ::Date::DAYS_INTO_WEEK.key?(week_start) week_start end ``` Returns week start day symbol (e.g. 
:monday), or raises an `ArgumentError` for invalid day symbol. tomorrow() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 43 def tomorrow ::Date.current.tomorrow end ``` Returns a new [`Date`](date) representing the date 1 day after today (i.e. tomorrow's date). yesterday() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 38 def yesterday ::Date.current.yesterday end ``` Returns a new [`Date`](date) representing the date 1 day ago (i.e. yesterday's date). <=>(other) Also aliased as: [compare\_without\_coercion](date#method-i-compare_without_coercion) Alias for: [compare\_with\_coercion](date#method-i-compare_with_coercion) acts\_like\_date?() Show source ``` # File activesupport/lib/active_support/core_ext/date/acts_like.rb, line 7 def acts_like_date? true end ``` Duck-types as a Date-like class. See [`Object#acts_like?`](object#method-i-acts_like-3F). advance(options) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 112 def advance(options) d = self d = d >> options[:years] * 12 if options[:years] d = d >> options[:months] if options[:months] d = d + options[:weeks] * 7 if options[:weeks] d = d + options[:days] if options[:days] d end ``` Provides precise [`Date`](date) calculations for years, months, and days. The `options` parameter takes a hash with any of these keys: `:years`, `:months`, `:weeks`, `:days`. ago(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 55 def ago(seconds) in_time_zone.since(-seconds) end ``` Converts [`Date`](date) to a [`Time`](time) (or [`DateTime`](datetime) if necessary) with the time portion set to the beginning of the day (0:00) and then subtracts the specified number of seconds. 
at\_beginning\_of\_day() Alias for: [beginning\_of\_day](date#method-i-beginning_of_day) at\_end\_of\_day() Alias for: [end\_of\_day](date#method-i-end_of_day) at\_midday() Alias for: [middle\_of\_day](date#method-i-middle_of_day) at\_middle\_of\_day() Alias for: [middle\_of\_day](date#method-i-middle_of_day) at\_midnight() Alias for: [beginning\_of\_day](date#method-i-beginning_of_day) at\_noon() Alias for: [middle\_of\_day](date#method-i-middle_of_day) beginning\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 67 def beginning_of_day in_time_zone end ``` Converts [`Date`](date) to a [`Time`](time) (or [`DateTime`](datetime) if necessary) with the time portion set to the beginning of the day (0:00) Also aliased as: [midnight](date#method-i-midnight), [at\_midnight](date#method-i-at_midnight), [at\_beginning\_of\_day](date#method-i-at_beginning_of_day) change(options) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 128 def change(options) ::Date.new( options.fetch(:year, year), options.fetch(:month, month), options.fetch(:day, day) ) end ``` Returns a new [`Date`](date) where one or more of the elements have been changed according to the `options` parameter. The `options` parameter is a hash with a combination of these keys: `:year`, `:month`, `:day`. ``` Date.new(2007, 5, 12).change(day: 1) # => Date.new(2007, 5, 1) Date.new(2007, 5, 12).change(year: 2005, month: 1) # => Date.new(2005, 1, 12) ``` compare\_with\_coercion(other) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 137 def compare_with_coercion(other) if other.is_a?(Time) to_datetime <=> other else compare_without_coercion(other) end end ``` Allow [`Date`](date) to be compared with [`Time`](time) by converting to [`DateTime`](datetime) and relying on the <=> from there. 
Also aliased as: [<=>](date#method-i-3C-3D-3E) compare\_without\_coercion(other) Alias for: [<=>](date#method-i-3C-3D-3E) default\_inspect() Alias for: [inspect](date#method-i-inspect) end\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 85 def end_of_day in_time_zone.end_of_day end ``` Converts [`Date`](date) to a [`Time`](time) (or [`DateTime`](datetime) if necessary) with the time portion set to the end of the day (23:59:59) Also aliased as: [at\_end\_of\_day](date#method-i-at_end_of_day) in(seconds) Alias for: [since](date#method-i-since) inspect() Also aliased as: [default\_inspect](date#method-i-default_inspect) Alias for: [readable\_inspect](date#method-i-readable_inspect) midday() Alias for: [middle\_of\_day](date#method-i-middle_of_day) middle\_of\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 75 def middle_of_day in_time_zone.middle_of_day end ``` Converts [`Date`](date) to a [`Time`](time) (or [`DateTime`](datetime) if necessary) with the time portion set to the middle of the day (12:00) Also aliased as: [midday](date#method-i-midday), [noon](date#method-i-noon), [at\_midday](date#method-i-at_midday), [at\_noon](date#method-i-at_noon), [at\_middle\_of\_day](date#method-i-at_middle_of_day) midnight() Alias for: [beginning\_of\_day](date#method-i-beginning_of_day) noon() Alias for: [middle\_of\_day](date#method-i-middle_of_day) readable\_inspect() Show source ``` # File activesupport/lib/active_support/core_ext/date/conversions.rb, line 62 def readable_inspect strftime("%a, %d %b %Y") end ``` Overrides the default inspect method with a human readable one, e.g., “Mon, 21 Feb 2005” Also aliased as: [inspect](date#method-i-inspect) since(seconds) Show source ``` # File activesupport/lib/active_support/core_ext/date/calculations.rb, line 61 def since(seconds) in_time_zone.since(seconds) end ``` Converts [`Date`](date) to a [`Time`](time) (or 
[`DateTime`](datetime) if necessary) with the time portion set to the beginning of the day (0:00) and then adds the specified number of seconds Also aliased as: [in](date#method-i-in) to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/core_ext/date/conversions.rb, line 47 def to_formatted_s(format = :default) if formatter = DATE_FORMATS[format] if formatter.respond_to?(:call) formatter.call(self).to_s else strftime(formatter) end else to_default_s end end ``` Convert to a formatted string. See [`DATE_FORMATS`](date#DATE_FORMATS) for predefined formats. This method is aliased to `to_fs`. ``` date = Date.new(2007, 11, 10) # => Sat, 10 Nov 2007 date.to_formatted_s(:db) # => "2007-11-10" date.to_fs(:db) # => "2007-11-10" date.to_formatted_s(:short) # => "10 Nov" date.to_formatted_s(:number) # => "20071110" date.to_formatted_s(:long) # => "November 10, 2007" date.to_formatted_s(:long_ordinal) # => "November 10th, 2007" date.to_formatted_s(:rfc822) # => "10 Nov 2007" date.to_formatted_s(:iso8601) # => "2007-11-10" ``` Adding your own date formats to [`to_formatted_s`](date#method-i-to_formatted_s) -------------------------------------------------------------------------------- You can add your own formats to the [`Date::DATE_FORMATS`](date#DATE_FORMATS) hash. Use the format name as the hash key and either a strftime string or Proc instance that takes a date argument as the value. ``` # config/initializers/date_formats.rb Date::DATE_FORMATS[:month_and_year] = '%B %Y' Date::DATE_FORMATS[:short_ordinal] = ->(date) { date.strftime("%B #{date.day.ordinalize}") } ``` Also aliased as: [to\_fs](date#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](date#method-i-to_formatted_s) to\_time(form = :local) Show source ``` # File activesupport/lib/active_support/core_ext/date/conversions.rb, line 82 def to_time(form = :local) raise ArgumentError, "Expected :local or :utc, got #{form.inspect}." 
unless [:local, :utc].include?(form) ::Time.public_send(form, year, month, day) end ``` Converts a [`Date`](date) instance to a [`Time`](time), where the time is set to the beginning of the day. The timezone can be either :local or :utc (default :local). ``` date = Date.new(2007, 11, 10) # => Sat, 10 Nov 2007 date.to_time # => 2007-11-10 00:00:00 -0800 date.to_time(:local) # => 2007-11-10 00:00:00 -0800 date.to_time(:utc) # => 2007-11-10 00:00:00 UTC ``` NOTE: The `:local` timezone is Ruby's **process** timezone, i.e. `ENV["TZ"]`. If the *application's* timezone is needed, then use `in_time_zone` instead. xmlschema() Show source ``` # File activesupport/lib/active_support/core_ext/date/conversions.rb, line 94 def xmlschema in_time_zone.xmlschema end ``` Returns a string which represents the time in the used time zone as [`DateTime`](datetime) defined by XML Schema: ``` date = Date.new(2015, 05, 23) # => Sat, 23 May 2015 date.xmlschema # => "2015-05-23T00:00:00+04:00" ``` rails module ActiveModel module ActiveModel =================== gem\_version() Show source ``` # File activemodel/lib/active_model/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Active Model as a `Gem::Version` version() Show source ``` # File activemodel/lib/active_model/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded Active Model as a `Gem::Version` rails module ActionController module ActionController ======================== add\_renderer(key, &block) Show source ``` # File actionpack/lib/action_controller/metal/renderers.rb, line 7 def self.add_renderer(key, &block) Renderers.add(key, &block) end ``` See `Renderers.add` remove\_renderer(key) Show source ``` # File actionpack/lib/action_controller/metal/renderers.rb, line 12 def self.remove_renderer(key) Renderers.remove(key) end ``` See `Renderers.remove` rails module ActiveSupport module ActiveSupport
===================== gem\_version() Show source ``` # File activesupport/lib/active_support/gem_version.rb, line 5 def self.gem_version Gem::Version.new VERSION::STRING end ``` Returns the version of the currently loaded Active Support as a `Gem::Version`. version() Show source ``` # File activesupport/lib/active_support/version.rb, line 7 def self.version gem_version end ``` Returns the version of the currently loaded [`ActiveSupport`](activesupport) as a `Gem::Version` rails class ActiveSupport::CachingKeyGenerator class ActiveSupport::CachingKeyGenerator ========================================= Parent: [Object](../object) [`CachingKeyGenerator`](cachingkeygenerator) is a wrapper around [`KeyGenerator`](keygenerator) which allows users to avoid re-executing the key generation process when it's called using the same salt and key\_size. new(key\_generator) Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 48 def initialize(key_generator) @key_generator = key_generator @cache_keys = Concurrent::Map.new end ``` generate\_key(\*args) Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 54 def generate_key(*args) @cache_keys[args.join("|")] ||= @key_generator.generate_key(*args) end ``` Returns a derived key suitable for use. rails module ActiveSupport::Concern module ActiveSupport::Concern ============================== A typical module looks like this: ``` module M def self.included(base) base.extend ClassMethods base.class_eval do scope :disabled, -> { where(disabled: true) } end end module ClassMethods ... end end ``` By using `ActiveSupport::Concern` the above module could instead be written as: ``` require "active_support/concern" module M extend ActiveSupport::Concern included do scope :disabled, -> { where(disabled: true) } end class_methods do ... end end ``` Moreover, it gracefully handles module dependencies. 
Given a `Foo` module and a `Bar` module which depends on the former, we would typically write the following: ``` module Foo def self.included(base) base.class_eval do def self.method_injected_by_foo ... end end end end module Bar def self.included(base) base.method_injected_by_foo end end class Host include Foo # We need to include this dependency for Bar include Bar # Bar is the module that Host really needs end ``` But why should `Host` care about `Bar`'s dependencies, namely `Foo`? We could try to hide these from `Host` by directly including `Foo` in `Bar`: ``` module Bar include Foo def self.included(base) base.method_injected_by_foo end end class Host include Bar end ``` Unfortunately this won't work, since when `Foo` is included, its `base` is the `Bar` module, not the `Host` class. With `ActiveSupport::Concern`, module dependencies are properly resolved: ``` require "active_support/concern" module Foo extend ActiveSupport::Concern included do def self.method_injected_by_foo ... end end end module Bar extend ActiveSupport::Concern include Foo included do self.method_injected_by_foo end end class Host include Bar # It works, now Bar takes care of its dependencies end ``` ### Prepending concerns Just like `include`, concerns also support `prepend` with a corresponding `prepended do` callback. `module ClassMethods` or `class_methods do` are prepended as well. `prepend` is also used for any dependencies. class\_methods(&class\_methods\_module\_definition) Show source ``` # File activesupport/lib/active_support/concern.rb, line 207 def class_methods(&class_methods_module_definition) mod = const_defined?(:ClassMethods, false) ? const_get(:ClassMethods) : const_set(:ClassMethods, Module.new) mod.module_eval(&class_methods_module_definition) end ``` Define class methods from the given block. You can define private class methods as well.
``` module Example extend ActiveSupport::Concern class_methods do def foo; puts 'foo'; end private def bar; puts 'bar'; end end end class Buzz include Example end Buzz.foo # => "foo" Buzz.bar # => private method 'bar' called for Buzz:Class(NoMethodError) ``` included(base = nil, &block) Show source ``` # File activesupport/lib/active_support/concern.rb, line 156 def included(base = nil, &block) if base.nil? if instance_variable_defined?(:@_included_block) if @_included_block.source_location != block.source_location raise MultipleIncludedBlocks end else @_included_block = block end else super end end ``` Evaluate given block in context of base class, so that you can write class macros here. When you define more than one `included` block, it raises an exception. Calls superclass method prepended(base = nil, &block) Show source ``` # File activesupport/lib/active_support/concern.rb, line 173 def prepended(base = nil, &block) if base.nil? if instance_variable_defined?(:@_prepended_block) if @_prepended_block.source_location != block.source_location raise MultiplePrependBlocks end else @_prepended_block = block end else super end end ``` Evaluate given block in context of base class, so that you can write class macros here. When you define more than one `prepended` block, it raises an exception. Calls superclass method rails class ActiveSupport::LogSubscriber class ActiveSupport::LogSubscriber =================================== Parent: Subscriber `ActiveSupport::LogSubscriber` is an object set to consume `ActiveSupport::Notifications` with the sole purpose of logging them. The log subscriber dispatches notifications to a registered object based on its given namespace. 
An example would be an Active Record log subscriber responsible for logging queries: ``` module ActiveRecord class LogSubscriber < ActiveSupport::LogSubscriber def sql(event) info "#{event.payload[:name]} (#{event.duration}) #{event.payload[:sql]}" end end end ``` And it's finally registered as: ``` ActiveRecord::LogSubscriber.attach_to :active_record ``` Since we need to know all instance methods before attaching the log subscriber, the line above should be called after your `ActiveRecord::LogSubscriber` definition. A logger also needs to be set with `ActiveRecord::LogSubscriber.logger=`. This is assigned automatically in a Rails environment. Once configured, whenever a `"sql.active_record"` notification is published, it will properly dispatch the event (`ActiveSupport::Notifications::Event`) to the `sql` method. Being an `ActiveSupport::Notifications` consumer, `ActiveSupport::LogSubscriber` exposes a simple interface to check if instrumented code raises an exception. It is common to log a different message in case of an error, and this can be achieved by extending the previous example: ``` module ActiveRecord class LogSubscriber < ActiveSupport::LogSubscriber def sql(event) exception = event.payload[:exception] if exception exception_object = event.payload[:exception_object] error "[ERROR] #{event.payload[:name]}: #{exception.join(', ')} " \ "(#{exception_object.backtrace.first})" else # standard logger code end end end end ``` The log subscriber also has some helpers to deal with logging and automatically flushes all logs when the request finishes (via `action_dispatch.callback` notification) in a Rails environment. BLACK, BLUE, CYAN, GREEN, MAGENTA, RED, WHITE, YELLOW — color codes. BOLD. CLEAR — embed in a [`String`](../string) to clear all previous ANSI sequences. logger[W] flush\_all!() Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 96 def flush_all!
logger.flush if logger.respond_to?(:flush) end ``` Flush all [`log_subscribers`](logsubscriber#method-c-log_subscribers)' logger. log\_subscribers() Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 91 def log_subscribers subscribers end ``` logger() Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 83 def logger @logger ||= if defined?(Rails) && Rails.respond_to?(:logger) Rails.logger end end ``` finish(name, id, payload) Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 114 def finish(name, id, payload) super if logger rescue => e log_exception(name, e) end ``` Calls superclass method logger() Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 106 def logger LogSubscriber.logger end ``` publish\_event(event) Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 120 def publish_event(event) super if logger rescue => e log_exception(event.name, e) end ``` Calls superclass method start(name, id, payload) Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 110 def start(name, id, payload) super if logger end ``` Calls superclass method color(text, color, bold = false) Show source ``` # File activesupport/lib/active_support/log_subscriber.rb, line 139 def color(text, color, bold = false) # :doc: return text unless colorize_logging color = self.class.const_get(color.upcase) if color.is_a?(Symbol) bold = bold ? BOLD : "" "#{bold}#{color}#{text}#{CLEAR}" end ``` Set color by using a symbol or one of the defined constants. If a third option is set to `true`, it also adds bold to the string. This is based on the Highline implementation and will automatically append [`CLEAR`](logsubscriber#CLEAR) to the end of the returned [`String`](../string).
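The `color` helper above only wraps the text in ANSI escape sequences. A stand-alone sketch using the standard ANSI codes (the constant values here are an assumption stated for illustration, not read from the Rails source):

```ruby
# Minimal sketch of the color helper above. CLEAR/BOLD/RED hold standard
# ANSI escape codes (assumed values, not the actual Rails constants).
CLEAR = "\e[0m"
BOLD  = "\e[1m"
RED   = "\e[31m"

# Hypothetical stand-in for LogSubscriber#color: prepend bold if requested,
# then the color code, and always append CLEAR at the end.
def colorize(text, code, bold = false)
  prefix = bold ? BOLD : ""
  "#{prefix}#{code}#{text}#{CLEAR}"
end

colorize("42 ms", RED, true) # => "\e[1m\e[31m42 ms\e[0m"
```

Appending `CLEAR` unconditionally is what keeps one colored fragment from bleeding into the rest of the log line.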
rails class ActiveSupport::KeyGenerator class ActiveSupport::KeyGenerator ================================== Parent: [Object](../object) [`KeyGenerator`](keygenerator) is a simple wrapper around OpenSSL's implementation of PBKDF2. It can be used to derive a number of keys for various purposes from a given secret. This lets Rails applications have a single secure secret, but avoid reusing that key in multiple incompatible contexts. hash\_digest\_class() Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 21 def hash_digest_class @hash_digest_class ||= OpenSSL::Digest::SHA1 end ``` hash\_digest\_class=(klass) Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 13 def hash_digest_class=(klass) if klass.kind_of?(Class) && klass < OpenSSL::Digest @hash_digest_class = klass else raise ArgumentError, "#{klass} is expected to be an OpenSSL::Digest subclass" end end ``` new(secret, options = {}) Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 26 def initialize(secret, options = {}) @secret = secret # The default iterations are higher than required for our key derivation uses # on the off chance someone uses this for password storage @iterations = options[:iterations] || 2**16 # Also allow configuration here so people can use this to build a rotation # scheme when switching the digest class. @hash_digest_class = options[:hash_digest_class] || self.class.hash_digest_class end ``` generate\_key(salt, key\_size = 64) Show source ``` # File activesupport/lib/active_support/key_generator.rb, line 39 def generate_key(salt, key_size = 64) OpenSSL::PKCS5.pbkdf2_hmac(@secret, salt, @iterations, key_size, @hash_digest_class.new) end ``` Returns a derived key suitable for use. The default key\_size is chosen to be compatible with the default settings of [`ActiveSupport::MessageVerifier`](messageverifier). i.e. 
OpenSSL::Digest::SHA1#block\_length rails class ActiveSupport::ExecutionWrapper class ActiveSupport::ExecutionWrapper ====================================== Parent: [Object](../object) Included modules: [ActiveSupport::Callbacks](callbacks) active[RW] error\_reporter() Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 112 def self.error_reporter @error_reporter ||= ActiveSupport::ErrorReporter.new end ``` register\_hook(hook, outer: false) Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 51 def self.register_hook(hook, outer: false) if outer to_run RunHook.new(hook), prepend: true to_complete :after, CompleteHook.new(hook) else to_run RunHook.new(hook) to_complete CompleteHook.new(hook) end end ``` Register an object to be invoked during both the `run` and `complete` steps. `hook.complete` will be passed the value returned from `hook.run`, and will only be invoked if `run` has previously been called. (Mostly, this means it won't be invoked if an exception occurs in a preceding `to_run` block; all ordinary `to_complete` blocks are invoked in that situation.) run!() Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 67 def self.run! if active? Null else new.tap do |instance| success = nil begin instance.run! success = true ensure instance.complete! unless success end end end end ``` Run this execution. Returns an instance, whose `complete!` method **must** be invoked after the work has been performed. Where possible, prefer `wrap`. to\_complete(\*args, &block) Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 22 def self.to_complete(*args, &block) set_callback(:complete, *args, &block) end ``` to\_run(\*args, &block) Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 18 def self.to_run(*args, &block) set_callback(:run, *args, &block) end ``` wrap() { || ... 
} Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 84 def self.wrap return yield if active? instance = run! begin yield rescue => error error_reporter.report(error, handled: false) raise ensure instance.complete! end end ``` Perform the work in the supplied block as an execution. complete!() Show source ``` # File activesupport/lib/active_support/execution_wrapper.rb, line 140 def complete! complete ensure self.class.active.delete(IsolatedExecutionState.unique_id) end ``` Complete this in-flight execution. This method **must** be called exactly once on the result of any call to `run!`. Where possible, prefer `wrap`. rails module ActiveSupport::Callbacks module ActiveSupport::Callbacks ================================ [`Callbacks`](callbacks) are code hooks that are run at key points in an object's life cycle. The typical use case is to have a base class define a set of callbacks relevant to the other functionality it supplies, so that subclasses can install callbacks that enhance or modify the base functionality without needing to override or redefine methods of the base class. Mixing in this module allows you to define the events in the object's life cycle that will support callbacks (via `ClassMethods.define_callbacks`), set the instance methods, procs, or callback objects to be called (via `ClassMethods.set_callback`), and run the installed callbacks at the appropriate times (via `run_callbacks`). By default callbacks are halted by throwing `:abort`. See `ClassMethods.define_callbacks` for details. Three kinds of callbacks are supported: before callbacks, run before a certain event; after callbacks, run after the event; and around callbacks, blocks that surround the event, triggering it when they yield. Callback code can be contained in instance methods, procs or lambdas, or callback objects that respond to certain predetermined methods. See `ClassMethods.set_callback` for details. 
``` class Record include ActiveSupport::Callbacks define_callbacks :save def save run_callbacks :save do puts "- save" end end end class PersonRecord < Record set_callback :save, :before, :saving_message def saving_message puts "saving..." end set_callback :save, :after do |object| puts "saved" end end person = PersonRecord.new person.save ``` Output: ``` saving... - save saved ``` CALLBACK\_FILTER\_TYPES run\_callbacks(kind) { || ... } Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 95 def run_callbacks(kind) callbacks = __callbacks[kind.to_sym] if callbacks.empty? yield if block_given? else env = Filters::Environment.new(self, false, nil) next_sequence = callbacks.compile # Common case: no 'around' callbacks defined if next_sequence.final? next_sequence.invoke_before(env) env.value = !env.halted && (!block_given? || yield) next_sequence.invoke_after(env) env.value else invoke_sequence = Proc.new do skipped = nil while true current = next_sequence current.invoke_before(env) if current.final? env.value = !env.halted && (!block_given? || yield) elsif current.skip?(env) (skipped ||= []) << current next_sequence = next_sequence.nested next else next_sequence = next_sequence.nested begin target, block, method, *arguments = current.expand_call_template(env, invoke_sequence) target.send(method, *arguments, &block) ensure next_sequence = current end end current.invoke_after(env) skipped.pop.invoke_after(env) while skipped&.first break env.value end end invoke_sequence.call end end end ``` Runs the callbacks for the given event. Calls the before and around callbacks in the order they were set, yields the block (if given one), and then runs the after callbacks in reverse order. If the callback chain was halted, returns `false`. Otherwise returns the result of the block, `nil` if no callbacks have been set, or `true` if callbacks have been set but no block is given. 
``` run_callbacks :save do save end ``` rails class ActiveSupport::Duration class ActiveSupport::Duration ============================== Parent: [Object](../object) Provides accurate date and time measurements using [`Date#advance`](../date#method-i-advance) and [`Time#advance`](../time#method-i-advance), respectively. It mainly supports the methods on [`Numeric`](../numeric). ``` 1.month.ago # equivalent to Time.now.advance(months: -1) ``` PARTS PARTS\_IN\_SECONDS SECONDS\_PER\_DAY SECONDS\_PER\_HOUR SECONDS\_PER\_MINUTE SECONDS\_PER\_MONTH SECONDS\_PER\_WEEK SECONDS\_PER\_YEAR VARIABLE\_PARTS value[R] build(value) Show source ``` # File activesupport/lib/active_support/duration.rb, line 188 def build(value) unless value.is_a?(::Numeric) raise TypeError, "can't build an #{self.name} from a #{value.class.name}" end parts = {} remainder_sign = value <=> 0 remainder = value.round(9).abs variable = false PARTS.each do |part| unless part == :seconds part_in_seconds = PARTS_IN_SECONDS[part] parts[part] = remainder.div(part_in_seconds) * remainder_sign remainder %= part_in_seconds unless parts[part].zero? variable ||= VARIABLE_PARTS.include?(part) end end end unless value == 0 parts[:seconds] = remainder * remainder_sign new(value, parts, variable) end ``` Creates a new [`Duration`](duration) from a seconds value that is converted to the individual parts: ``` ActiveSupport::Duration.build(31556952).parts # => {:years=>1} ActiveSupport::Duration.build(2716146).parts # => {:months=>1, :days=>1} ``` parse(iso8601duration) Show source ``` # File activesupport/lib/active_support/duration.rb, line 143 def parse(iso8601duration) parts = ISO8601Parser.new(iso8601duration).parse! new(calculate_total_seconds(parts), parts) end ``` Creates a new [`Duration`](duration) from string formatted according to ISO 8601 [`Duration`](duration). See [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601#Durations) for more information. This method allows negative parts to be present in pattern. 
If invalid string is provided, it will raise `ActiveSupport::Duration::ISO8601Parser::ParsingError`. %(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 306 def %(other) if Duration === other || Scalar === other Duration.build(value % other.value) elsif Numeric === other Duration.build(value % other) else raise_type_error(other) end end ``` Returns the modulo of this [`Duration`](duration) by another [`Duration`](duration) or [`Numeric`](../numeric). [`Numeric`](../numeric) values are treated as seconds. \*(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 281 def *(other) if Scalar === other || Duration === other Duration.new(value * other.value, @parts.transform_values { |number| number * other.value }, @variable || other.variable?) elsif Numeric === other Duration.new(value * other, @parts.transform_values { |number| number * other }, @variable) else raise_type_error(other) end end ``` Multiplies this [`Duration`](duration) by a [`Numeric`](../numeric) and returns a new [`Duration`](duration). +(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 262 def +(other) if Duration === other parts = @parts.merge(other._parts) do |_key, value, other_value| value + other_value end Duration.new(value + other.value, parts, @variable || other.variable?) else seconds = @parts.fetch(:seconds, 0) + other Duration.new(value + other, @parts.merge(seconds: seconds), @variable) end end ``` Adds another [`Duration`](duration) or a [`Numeric`](../numeric) to this [`Duration`](duration). [`Numeric`](../numeric) values are treated as seconds. -(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 276 def -(other) self + (-other) end ``` Subtracts another [`Duration`](duration) or a [`Numeric`](../numeric) from this [`Duration`](duration). [`Numeric`](../numeric) values are treated as seconds. 
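The seconds-to-parts decomposition that `Duration.build` performs (shown earlier in this section) can be sketched in plain Ruby with the same per-part second counts. `build_parts` is a hypothetical stand-in for the private logic, not the real API, and handles positive integer values only (the real `build` also handles sign and fractional seconds):

```ruby
# Per-part second counts matching ActiveSupport's PARTS_IN_SECONDS,
# ordered largest to smallest so each division peels off one unit.
PARTS_IN_SECONDS = {
  years:   31_556_952, # 365.2425 days
  months:  2_629_746,  # 1/12 of a year
  weeks:   604_800,
  days:    86_400,
  hours:   3_600,
  minutes: 60
}

# Hypothetical helper: split a positive seconds value into calendar parts.
def build_parts(value)
  parts = {}
  remainder = value
  PARTS_IN_SECONDS.each do |part, seconds|
    count = remainder / seconds     # integer division
    parts[part] = count unless count.zero?
    remainder %= seconds
  end
  parts[:seconds] = remainder unless remainder.zero?
  parts
end

build_parts(31_556_952) # => { years: 1 }
build_parts(2_716_146)  # => { months: 1, days: 1 }
```

The two calls mirror the `Duration.build` examples in the documentation above.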
/(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 292 def /(other) if Scalar === other Duration.new(value / other.value, @parts.transform_values { |number| number / other.value }, @variable) elsif Duration === other value / other.value elsif Numeric === other Duration.new(value / other, @parts.transform_values { |number| number / other }, @variable) else raise_type_error(other) end end ``` Divides this [`Duration`](duration) by a [`Numeric`](../numeric) and returns a new [`Duration`](duration). <=>(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 252 def <=>(other) if Duration === other value <=> other.value elsif Numeric === other value <=> other end end ``` Compares one [`Duration`](duration) with another or a [`Numeric`](../numeric) to this [`Duration`](duration). [`Numeric`](../numeric) values are treated as seconds. ==(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 335 def ==(other) if Duration === other other.value == value else other == value end end ``` Returns `true` if `other` is also a [`Duration`](duration) instance with the same `value`, or if `other == value`. after(time = ::Time.current) Alias for: [since](duration#method-i-since) ago(time = ::Time.current) Show source ``` # File activesupport/lib/active_support/duration.rb, line 438 def ago(time = ::Time.current) sum(-1, time) end ``` Calculates a new [`Time`](../time) or [`Date`](../date) that is as far in the past as this [`Duration`](duration) represents. Also aliased as: [until](duration#method-i-until), [before](duration#method-i-before) before(time = ::Time.current) Alias for: [ago](duration#method-i-ago) eql?(other) Show source ``` # File activesupport/lib/active_support/duration.rb, line 420 def eql?(other) Duration === other && other.value.eql?(value) end ``` Returns `true` if `other` is also a [`Duration`](duration) instance, which has the same parts as this one. 
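For a seconds-only duration, the `since`/`ago` pair above reduces to plain `Time` arithmetic. A sketch under that assumption (`duration_since`/`duration_ago` are hypothetical helper names, not Rails API; the real methods also advance calendar parts such as months through `Date#advance`):

```ruby
# Seconds-only sketch: since adds the duration to a reference time,
# ago subtracts it — the two are mirror images, as the aliases
# (from_now/after vs. until/before) suggest.
def duration_since(seconds, time = Time.now)
  time + seconds
end

def duration_ago(seconds, time = Time.now)
  time - seconds
end

t = Time.utc(2020, 1, 1)
duration_since(3600, t) # => 2020-01-01 01:00:00 UTC
duration_ago(3600, t)   # => 2019-12-31 23:00:00 UTC
```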
from\_now(time = ::Time.current) Alias for: [since](duration#method-i-since) hash() Show source ``` # File activesupport/lib/active_support/duration.rb, line 424 def hash @value.hash end ``` in\_days() Show source ``` # File activesupport/lib/active_support/duration.rb, line 393 def in_days in_seconds / SECONDS_PER_DAY.to_f end ``` Returns the amount of days a duration covers as a float ``` 12.hours.in_days # => 0.5 ``` in\_hours() Show source ``` # File activesupport/lib/active_support/duration.rb, line 386 def in_hours in_seconds / SECONDS_PER_HOUR.to_f end ``` Returns the amount of hours a duration covers as a float ``` 1.day.in_hours # => 24.0 ``` in\_minutes() Show source ``` # File activesupport/lib/active_support/duration.rb, line 379 def in_minutes in_seconds / SECONDS_PER_MINUTE.to_f end ``` Returns the amount of minutes a duration covers as a float ``` 1.day.in_minutes # => 1440.0 ``` in\_months() Show source ``` # File activesupport/lib/active_support/duration.rb, line 407 def in_months in_seconds / SECONDS_PER_MONTH.to_f end ``` Returns the amount of months a duration covers as a float ``` 9.weeks.in_months # => 2.07 ``` in\_seconds() Alias for: [to\_i](duration#method-i-to_i) in\_weeks() Show source ``` # File activesupport/lib/active_support/duration.rb, line 400 def in_weeks in_seconds / SECONDS_PER_WEEK.to_f end ``` Returns the amount of weeks a duration covers as a float ``` 2.months.in_weeks # => 8.696 ``` in\_years() Show source ``` # File activesupport/lib/active_support/duration.rb, line 414 def in_years in_seconds / SECONDS_PER_YEAR.to_f end ``` Returns the amount of years a duration covers as a float ``` 30.days.in_years # => 0.082 ``` iso8601(precision: nil) Show source ``` # File activesupport/lib/active_support/duration.rb, line 467 def iso8601(precision: nil) ISO8601Serializer.new(self, precision: precision).serialize end ``` Build ISO 8601 [`Duration`](duration) string for this duration. 
The `precision` parameter can be used to limit seconds' precision of duration. parts() Show source ``` # File activesupport/lib/active_support/duration.rb, line 235 def parts @parts.dup end ``` Returns a copy of the parts hash that defines the duration since(time = ::Time.current) Show source ``` # File activesupport/lib/active_support/duration.rb, line 430 def since(time = ::Time.current) sum(1, time) end ``` Calculates a new [`Time`](../time) or [`Date`](../date) that is as far in the future as this [`Duration`](duration) represents. Also aliased as: [from\_now](duration#method-i-from_now), [after](duration#method-i-after) to\_i() Show source ``` # File activesupport/lib/active_support/duration.rb, line 371 def to_i @value.to_i end ``` Returns the number of seconds that this [`Duration`](duration) represents. ``` 1.minute.to_i # => 60 1.hour.to_i # => 3600 1.day.to_i # => 86400 ``` Note that this conversion makes some assumptions about the duration of some periods, e.g. months are always 1/12 of year and years are 365.2425 days: ``` # equivalent to (1.year / 12).to_i 1.month.to_i # => 2629746 # equivalent to 365.2425.days.to_i 1.year.to_i # => 31556952 ``` In such cases, Ruby's core [Date](https://ruby-doc.org/stdlib/libdoc/date/rdoc/Date.html) and [Time](https://ruby-doc.org/stdlib/libdoc/time/rdoc/Time.html) should be used for precision date and time arithmetic. Also aliased as: [in\_seconds](duration#method-i-in_seconds) to\_s() Show source ``` # File activesupport/lib/active_support/duration.rb, line 347 def to_s @value.to_s end ``` Returns the amount of seconds a duration covers as a string. For more information check [`to_i`](duration#method-i-to_i) method. 
``` 1.day.to_s # => "86400" ``` until(time = ::Time.current) Alias for: [ago](duration#method-i-ago) rails class ActiveSupport::Subscriber class ActiveSupport::Subscriber ================================ Parent: [Object](../object) [`ActiveSupport::Subscriber`](subscriber) is an object set to consume [`ActiveSupport::Notifications`](notifications). The subscriber dispatches notifications to a registered object based on its given namespace. An example would be an Active Record subscriber responsible for collecting statistics about queries: ``` module ActiveRecord class StatsSubscriber < ActiveSupport::Subscriber attach_to :active_record def sql(event) Statsd.timing("sql.#{event.payload[:name]}", event.duration) end end end ``` Once configured, whenever a “sql.active\_record” notification is published, it will properly dispatch the event ([`ActiveSupport::Notifications::Event`](notifications/event)) to the `sql` method. We can detach a subscriber as well: ``` ActiveRecord::StatsSubscriber.detach_from(:active_record) ``` attach\_to(namespace, subscriber = new, notifier = ActiveSupport::Notifications, inherit\_all: false) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 33 def attach_to(namespace, subscriber = new, notifier = ActiveSupport::Notifications, inherit_all: false) @namespace = namespace @subscriber = subscriber @notifier = notifier @inherit_all = inherit_all subscribers << subscriber # Add event subscribers for all existing methods on the class. fetch_public_methods(subscriber, inherit_all).each do |event| add_event_subscriber(event) end end ``` Attach the subscriber to a namespace.
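The dispatch mechanics of `attach_to` can be sketched with a toy notifier: each public instance method of the subscriber becomes a handler for the matching `"method.namespace"` pattern. All names below (`ToyNotifier`, the `StatsSubscriber` stand-in, its `@timings` ivar) are illustrative only, not the real `ActiveSupport::Notifications` API:

```ruby
# Toy stand-in for the notifier; the real code subscribes through
# ActiveSupport::Notifications patterns.
class ToyNotifier
  def initialize
    @subscriptions = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(pattern, &block)
    @subscriptions[pattern] << block
  end

  def publish(pattern, *args)
    @subscriptions[pattern].each { |b| b.call(*args) }
  end
end

class StatsSubscriber
  # Mirror of the attach_to idea above: wire every public instance
  # method (here only #sql) to its "event.namespace" pattern.
  def self.attach_to(namespace, notifier)
    subscriber = new
    public_instance_methods(false).each do |event|
      notifier.subscribe("#{event}.#{namespace}") { |*args| subscriber.send(event, *args) }
    end
    subscriber
  end

  def sql(payload)
    (@timings ||= []) << "sql.#{payload[:name]}"
  end
end

notifier = ToyNotifier.new
subscriber = StatsSubscriber.attach_to(:active_record, notifier)
notifier.publish("sql.active_record", { name: "User Load" })
subscriber.instance_variable_get(:@timings) # => ["sql.User Load"]
```

This is also why `attach_to` must run after the subscriber class is fully defined: only methods that already exist get wired up (new methods are caught separately by `method_added`).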
detach\_from(namespace, notifier = ActiveSupport::Notifications) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 48 def detach_from(namespace, notifier = ActiveSupport::Notifications) @namespace = namespace @subscriber = find_attached_subscriber @notifier = notifier return unless subscriber subscribers.delete(subscriber) # Remove event subscribers of all existing methods on the class. fetch_public_methods(subscriber, true).each do |event| remove_event_subscriber(event) end # Reset notifier so that event subscribers will not add for new methods added to the class. @notifier = nil end ``` Detach the subscriber from a namespace. method\_added(event) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 67 def method_added(event) # Only public methods are added as subscribers, and only if a notifier # has been set up. This means that subscribers will only be set up for # classes that call #attach_to. if public_method_defined?(event) && notifier add_event_subscriber(event) end end ``` Adds event subscribers for all new methods added to the class. new() Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 128 def initialize @queue_key = [self.class.name, object_id].join "-" @patterns = {} super end ``` Calls superclass method subscribers() Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 76 def subscribers @@subscribers ||= [] end ``` add\_event\_subscriber(event) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 83 def add_event_subscriber(event) # :doc: return if invalid_event?(event) pattern = prepare_pattern(event) # Don't add multiple subscribers (e.g. if methods are redefined). 
return if pattern_subscribed?(pattern) subscriber.patterns[pattern] = notifier.subscribe(pattern, subscriber) end ``` remove\_event\_subscriber(event) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 94 def remove_event_subscriber(event) # :doc: return if invalid_event?(event) pattern = prepare_pattern(event) return unless pattern_subscribed?(pattern) notifier.unsubscribe(subscriber.patterns[pattern]) subscriber.patterns.delete(pattern) end ``` finish(name, id, payload) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 143 def finish(name, id, payload) event = event_stack.pop event.finish! event.payload.merge!(payload) method = name.split(".").first send(method, event) end ``` start(name, id, payload) Show source ``` # File activesupport/lib/active_support/subscriber.rb, line 134 def start(name, id, payload) event = ActiveSupport::Notifications::Event.new(name, nil, nil, id, payload) event.start! parent = event_stack.last parent << event if parent event_stack.push event end ```
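The `start`/`finish` pair above maintains an event stack so that nested instrumentations become parent/child events. A minimal plain-Ruby sketch of that bookkeeping (hypothetical `start`/`finish` lambdas, not the real code):

```ruby
# Sketch of the event-stack bookkeeping: start pushes a new event and
# attaches it to its parent; finish pops it off the stack.
event_stack = []
finished    = []

start = lambda do |name|
  event = { name: name, children: [] }
  event_stack.last[:children] << event unless event_stack.empty?
  event_stack.push(event)
end

finish = lambda do
  finished << event_stack.pop
end

start.call("process_action.action_controller")
start.call("sql.active_record") # nested inside the controller event
finish.call
finish.call

finished.map { |e| e[:name] }
# => ["sql.active_record", "process_action.action_controller"]
finished.last[:children].first[:name] # => "sql.active_record"
```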
rails module ActiveSupport::NumericWithFormat module ActiveSupport::NumericWithFormat ======================================== to\_formatted\_s(format = nil, options = nil) Show source ``` # File activesupport/lib/active_support/core_ext/numeric/conversions.rb, line 111 def to_formatted_s(format = nil, options = nil) return to_s if format.nil? case format when Integer, String to_s(format) when :phone ActiveSupport::NumberHelper.number_to_phone(self, options || {}) when :currency ActiveSupport::NumberHelper.number_to_currency(self, options || {}) when :percentage ActiveSupport::NumberHelper.number_to_percentage(self, options || {}) when :delimited ActiveSupport::NumberHelper.number_to_delimited(self, options || {}) when :rounded ActiveSupport::NumberHelper.number_to_rounded(self, options || {}) when :human ActiveSupport::NumberHelper.number_to_human(self, options || {}) when :human_size ActiveSupport::NumberHelper.number_to_human_size(self, options || {}) when Symbol to_s else to_s(format) end end ``` Provides options for converting numbers into formatted strings. Options are provided for phone numbers, currency, percentage, precision, positional notation, file size and pretty printing. This method is aliased to `to_fs`. 
#### Options For details on which formats use which options, see [`ActiveSupport::NumberHelper`](numberhelper) #### Examples ``` Phone Numbers: 5551234.to_formatted_s(:phone) # => "555-1234" 1235551234.to_formatted_s(:phone) # => "123-555-1234" 1235551234.to_formatted_s(:phone, area_code: true) # => "(123) 555-1234" 1235551234.to_formatted_s(:phone, delimiter: ' ') # => "123 555 1234" 1235551234.to_formatted_s(:phone, area_code: true, extension: 555) # => "(123) 555-1234 x 555" 1235551234.to_formatted_s(:phone, country_code: 1) # => "+1-123-555-1234" 1235551234.to_formatted_s(:phone, country_code: 1, extension: 1343, delimiter: '.') # => "+1.123.555.1234 x 1343" Currency: 1234567890.50.to_formatted_s(:currency) # => "$1,234,567,890.50" 1234567890.506.to_formatted_s(:currency) # => "$1,234,567,890.51" 1234567890.506.to_formatted_s(:currency, precision: 3) # => "$1,234,567,890.506" 1234567890.506.to_formatted_s(:currency, round_mode: :down) # => "$1,234,567,890.50" 1234567890.506.to_formatted_s(:currency, locale: :fr) # => "1 234 567 890,51 €" -1234567890.50.to_formatted_s(:currency, negative_format: '(%u%n)') # => "($1,234,567,890.50)" 1234567890.50.to_formatted_s(:currency, unit: '&pound;', separator: ',', delimiter: '') # => "&pound;1234567890,50" 1234567890.50.to_formatted_s(:currency, unit: '&pound;', separator: ',', delimiter: '', format: '%n %u') # => "1234567890,50 &pound;" Percentage: 100.to_formatted_s(:percentage) # => "100.000%" 100.to_formatted_s(:percentage, precision: 0) # => "100%" 1000.to_formatted_s(:percentage, delimiter: '.', separator: ',') # => "1.000,000%" 302.24398923423.to_formatted_s(:percentage, precision: 5) # => "302.24399%" 302.24398923423.to_formatted_s(:percentage, round_mode: :down) # => "302.243%" 1000.to_formatted_s(:percentage, locale: :fr) # => "1 000,000%" 100.to_formatted_s(:percentage, format: '%n %') # => "100.000 %" Delimited: 12345678.to_formatted_s(:delimited) # => "12,345,678" 12345678.05.to_formatted_s(:delimited) # => 
"12,345,678.05" 12345678.to_formatted_s(:delimited, delimiter: '.') # => "12.345.678" 12345678.to_formatted_s(:delimited, delimiter: ',') # => "12,345,678" 12345678.05.to_formatted_s(:delimited, separator: ' ') # => "12,345,678 05" 12345678.05.to_formatted_s(:delimited, locale: :fr) # => "12 345 678,05" 98765432.98.to_formatted_s(:delimited, delimiter: ' ', separator: ',') # => "98 765 432,98" Rounded: 111.2345.to_formatted_s(:rounded) # => "111.235" 111.2345.to_formatted_s(:rounded, precision: 2) # => "111.23" 111.2345.to_formatted_s(:rounded, precision: 2, round_mode: :up) # => "111.24" 13.to_formatted_s(:rounded, precision: 5) # => "13.00000" 389.32314.to_formatted_s(:rounded, precision: 0) # => "389" 111.2345.to_formatted_s(:rounded, significant: true) # => "111" 111.2345.to_formatted_s(:rounded, precision: 1, significant: true) # => "100" 13.to_formatted_s(:rounded, precision: 5, significant: true) # => "13.000" 111.234.to_formatted_s(:rounded, locale: :fr) # => "111,234" 13.to_formatted_s(:rounded, precision: 5, significant: true, strip_insignificant_zeros: true) # => "13" 389.32314.to_formatted_s(:rounded, precision: 4, significant: true) # => "389.3" 1111.2345.to_formatted_s(:rounded, precision: 2, separator: ',', delimiter: '.') # => "1.111,23" Human-friendly size in Bytes: 123.to_formatted_s(:human_size) # => "123 Bytes" 1234.to_formatted_s(:human_size) # => "1.21 KB" 12345.to_formatted_s(:human_size) # => "12.1 KB" 1234567.to_formatted_s(:human_size) # => "1.18 MB" 1234567890.to_formatted_s(:human_size) # => "1.15 GB" 1234567890123.to_formatted_s(:human_size) # => "1.12 TB" 1234567890123456.to_formatted_s(:human_size) # => "1.1 PB" 1234567890123456789.to_formatted_s(:human_size) # => "1.07 EB" 1234567.to_formatted_s(:human_size, precision: 2) # => "1.2 MB" 1234567.to_formatted_s(:human_size, precision: 2, round_mode: :up) # => "1.3 MB" 483989.to_formatted_s(:human_size, precision: 2) # => "470 KB" 1234567.to_formatted_s(:human_size, precision: 2, 
separator: ',') # => "1,2 MB" 1234567890123.to_formatted_s(:human_size, precision: 5) # => "1.1228 TB" 524288000.to_formatted_s(:human_size, precision: 5) # => "500 MB" Human-friendly format: 123.to_formatted_s(:human) # => "123" 1234.to_formatted_s(:human) # => "1.23 Thousand" 12345.to_formatted_s(:human) # => "12.3 Thousand" 1234567.to_formatted_s(:human) # => "1.23 Million" 1234567890.to_formatted_s(:human) # => "1.23 Billion" 1234567890123.to_formatted_s(:human) # => "1.23 Trillion" 1234567890123456.to_formatted_s(:human) # => "1.23 Quadrillion" 1234567890123456789.to_formatted_s(:human) # => "1230 Quadrillion" 489939.to_formatted_s(:human, precision: 2) # => "490 Thousand" 489939.to_formatted_s(:human, precision: 2, round_mode: :down) # => "480 Thousand" 489939.to_formatted_s(:human, precision: 4) # => "489.9 Thousand" 1234567.to_formatted_s(:human, precision: 4, significant: false) # => "1.2346 Million" 1234567.to_formatted_s(:human, precision: 1, separator: ',', significant: false) # => "1,2 Million" ``` Also aliased as: [to\_fs](numericwithformat#method-i-to_fs) to\_fs(format = nil, options = nil) Alias for: [to\_formatted\_s](numericwithformat#method-i-to_formatted_s) rails module ActiveSupport::RangeWithFormat module ActiveSupport::RangeWithFormat ====================================== RANGE\_FORMATS to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/core_ext/range/conversions.rb, line 30 def to_formatted_s(format = :default) if formatter = RANGE_FORMATS[format] formatter.call(first, last) else to_s end end ``` Convert range to a formatted string. See [`RANGE_FORMATS`](rangewithformat#RANGE_FORMATS) for predefined formats. This method is aliased to `to_fs`. ``` range = (1..100) # => 1..100 range.to_s # => "1..100" range.to_formatted_s(:db) # => "BETWEEN '1' AND '100'" ``` Adding your own range formats to to\_s -------------------------------------- You can add your own formats to the Range::RANGE\_FORMATS hash. 
Use the format name as the hash key and a Proc instance. ``` # config/initializers/range_formats.rb Range::RANGE_FORMATS[:short] = ->(start, stop) { "Between #{start.to_formatted_s(:db)} and #{stop.to_formatted_s(:db)}" } ``` Also aliased as: [to\_fs](rangewithformat#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](rangewithformat#method-i-to_formatted_s) rails module ActiveSupport::Gzip module ActiveSupport::Gzip =========================== A convenient wrapper for the zlib standard library that allows compression/decompression of strings with gzip. ``` gzip = ActiveSupport::Gzip.compress('compress me!') # => "\x1F\x8B\b\x00o\x8D\xCDO\x00\x03K\xCE\xCF-(J-.V\xC8MU\x04\x00R>n\x83\f\x00\x00\x00" ActiveSupport::Gzip.decompress(gzip) # => "compress me!" ``` compress(source, level = Zlib::DEFAULT\_COMPRESSION, strategy = Zlib::DEFAULT\_STRATEGY) Show source ``` # File activesupport/lib/active_support/gzip.rb, line 30 def self.compress(source, level = Zlib::DEFAULT_COMPRESSION, strategy = Zlib::DEFAULT_STRATEGY) output = Stream.new gz = Zlib::GzipWriter.new(output, level, strategy) gz.write(source) gz.close output.string end ``` Compresses a string using gzip. decompress(source) Show source ``` # File activesupport/lib/active_support/gzip.rb, line 25 def self.decompress(source) Zlib::GzipReader.wrap(StringIO.new(source), &:read) end ``` Decompresses a gzipped string. rails class ActiveSupport::ParameterFilter class ActiveSupport::ParameterFilter ===================================== Parent: [Object](../object) `ParameterFilter` allows you to specify keys for sensitive data from hash-like object and replace corresponding value. Filtering only certain sub-keys from a hash is possible by using the dot notation: 'credit\_card.number'. If a proc is given, each key and value of a hash and all sub-hashes are passed to it, where the value or the key can be replaced using String#replace or similar methods. 
``` ActiveSupport::ParameterFilter.new([:password]) => replaces the value to all keys matching /password/i with "[FILTERED]" ActiveSupport::ParameterFilter.new([:foo, "bar"]) => replaces the value to all keys matching /foo|bar/i with "[FILTERED]" ActiveSupport::ParameterFilter.new([/\Apin\z/i, /\Apin_/i]) => replaces the value for the exact (case-insensitive) key 'pin' and all (case-insensitive) keys beginning with 'pin_', with "[FILTERED]". Does not match keys with 'pin' as a substring, such as 'shipping_id'. ActiveSupport::ParameterFilter.new(["credit_card.code"]) => replaces { credit_card: {code: "xxxx"} } with "[FILTERED]", does not change { file: { code: "xxxx"} } ActiveSupport::ParameterFilter.new([-> (k, v) do v.reverse! if /secret/i.match?(k) end]) => reverses the value to all keys matching /secret/i ``` new(filters = [], mask: FILTERED) Show source ``` # File activesupport/lib/active_support/parameter_filter.rb, line 42 def initialize(filters = [], mask: FILTERED) @filters = filters @mask = mask end ``` Create instance with given filters. Supported type of filters are `String`, `Regexp`, and `Proc`. Other types of filters are treated as `String` using `to_s`. For `Proc` filters, key, value, and optional original hash is passed to block arguments. #### Options * `:mask` - A replaced object when filtered. Defaults to `"[FILTERED]"`. filter(params) Show source ``` # File activesupport/lib/active_support/parameter_filter.rb, line 48 def filter(params) compiled_filter.call(params) end ``` Mask value of `params` if key matches one of filters. filter\_param(key, value) Show source ``` # File activesupport/lib/active_support/parameter_filter.rb, line 53 def filter_param(key, value) @filters.empty? ? value : compiled_filter.value_for_key(key, value) end ``` Returns filtered value for given key. For `Proc` filters, third block argument is not populated. 
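The masking behaviour can be approximated in plain Ruby. This sketch (a hypothetical `filter_params` helper, not the real implementation) compiles `String`/`Symbol` filters into one case-insensitive `Regexp` and masks matching keys recursively; it omits `Proc` filters and the dot-notation sub-key support described above:

```ruby
# Simplified sketch of ParameterFilter-style masking: keys matching any
# filter (case-insensitively) have their values replaced with a mask.
FILTERED = "[FILTERED]"

def filter_params(params, filters, mask: FILTERED)
  regexp = Regexp.new(filters.map { |f| Regexp.escape(f.to_s) }.join("|"),
                      Regexp::IGNORECASE)
  params.each_with_object({}) do |(key, value), out|
    out[key] = if regexp.match?(key.to_s)
                 mask
               elsif value.is_a?(Hash)
                 filter_params(value, filters, mask: mask) # recurse into sub-hashes
               else
                 value
               end
  end
end

filter_params({ password: "secret", user: { token: "abc", name: "me" } },
              [:password, "token"])
# => { password: "[FILTERED]", user: { token: "[FILTERED]", name: "me" } }
```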
rails module ActiveSupport::JSON module ActiveSupport::JSON =========================== DATETIME\_REGEX DATE\_REGEX matches YAML-formatted dates decode(json) Show source ``` # File activesupport/lib/active_support/json/decoding.rb, line 22 def decode(json) data = ::JSON.parse(json, quirks_mode: true) if ActiveSupport.parse_json_times convert_dates_from(data) else data end end ``` Parses a [`JSON`](json) string (JavaScript [`Object`](../object) Notation) into a hash. See [www.json.org](http://www.json.org) for more info. ``` ActiveSupport::JSON.decode("{\"team\":\"rails\",\"players\":\"36\"}") => {"team" => "rails", "players" => "36"} ``` encode(value, options = nil) Show source ``` # File activesupport/lib/active_support/json/encoding.rb, line 21 def self.encode(value, options = nil) Encoding.json_encoder.new(options).encode(value) end ``` Dumps objects in [`JSON`](json) (JavaScript [`Object`](../object) Notation). See [www.json.org](http://www.json.org) for more info. ``` ActiveSupport::JSON.encode({ team: 'rails', players: '36' }) # => "{\"team\":\"rails\",\"players\":\"36\"}" ``` parse\_error() Show source ``` # File activesupport/lib/active_support/json/decoding.rb, line 42 def parse_error ::JSON::ParserError end ``` Returns the class of the error that will be raised when there is an error in decoding [`JSON`](json). Using this method means you won't directly depend on the ActiveSupport's [`JSON`](json) implementation, in case it changes in the future. 
``` begin obj = ActiveSupport::JSON.decode(some_string) rescue ActiveSupport::JSON.parse_error Rails.logger.warn("Attempted to decode invalid JSON: #{some_string}") end ``` rails class ActiveSupport::Deprecation class ActiveSupport::Deprecation ================================= Parent: [Object](../object) Included modules: [Singleton](../singleton), [ActiveSupport::Deprecation::Behavior](deprecation/behavior), [ActiveSupport::Deprecation::Reporting](deprecation/reporting), [ActiveSupport::Deprecation::Disallowed](deprecation/disallowed), [ActiveSupport::Deprecation::MethodWrapper](deprecation/methodwrapper) Deprecation specifies the API used by Rails to deprecate methods, instance variables, objects and constants. DEFAULT\_BEHAVIORS Default warning behaviors per [`Rails.env`](../rails#method-c-env). deprecation\_horizon[RW] The version number in which the deprecated behavior will be removed, by default. new(deprecation\_horizon = "7.1", gem\_name = "Rails") Show source ``` # File activesupport/lib/active_support/deprecation.rb, line 41 def initialize(deprecation_horizon = "7.1", gem_name = "Rails") self.gem_name = gem_name self.deprecation_horizon = deprecation_horizon # By default, warnings are not silenced and debugging is off. self.silenced = false self.debug = false @silenced_thread = Concurrent::ThreadLocalVar.new(false) @explicitly_allowed_warnings = Concurrent::ThreadLocalVar.new(nil) end ``` It accepts two parameters on initialization. The first is a version of library and the second is a library name. ``` ActiveSupport::Deprecation.new('2.0', 'MyLibrary') ``` rails module ActiveSupport::Notifications module ActiveSupport::Notifications ==================================== [`Notifications`](notifications) ================================ `ActiveSupport::Notifications` provides an instrumentation API for Ruby. 
Instrumenters ------------- To instrument an event you just need to do: ``` ActiveSupport::Notifications.instrument('render', extra: :information) do render plain: 'Foo' end ``` That first executes the block and then notifies all subscribers once done. In the example above `render` is the name of the event, and the rest is called the *payload*. The payload is a mechanism that allows instrumenters to pass extra information to subscribers. Payloads consist of a hash whose contents are arbitrary and generally depend on the event. Subscribers ----------- You can consume those events and the information they provide by registering a subscriber. ``` ActiveSupport::Notifications.subscribe('render') do |name, start, finish, id, payload| name # => String, name of the event (such as 'render' from above) start # => Time, when the instrumented block started execution finish # => Time, when the instrumented block ended execution id # => String, unique ID for the instrumenter that fired the event payload # => Hash, the payload end ``` Here, the `start` and `finish` values represent wall-clock time. If you are concerned about accuracy, you can register a monotonic subscriber. ``` ActiveSupport::Notifications.monotonic_subscribe('render') do |name, start, finish, id, payload| name # => String, name of the event (such as 'render' from above) start # => Monotonic time, when the instrumented block started execution finish # => Monotonic time, when the instrumented block ended execution id # => String, unique ID for the instrumenter that fired the event payload # => Hash, the payload end ``` The `start` and `finish` values above represent monotonic time. For instance, let's store all “render” events in an array: ``` events = [] ActiveSupport::Notifications.subscribe('render') do |*args| events << ActiveSupport::Notifications::Event.new(*args) end ``` That code returns right away, you are just subscribing to “render” events. 
The block is saved and will be called whenever someone instruments “render”: ``` ActiveSupport::Notifications.instrument('render', extra: :information) do render plain: 'Foo' end event = events.first event.name # => "render" event.duration # => 10 (in milliseconds) event.payload # => { extra: :information } ``` The block in the `subscribe` call gets the name of the event, start timestamp, end timestamp, a string with a unique identifier for that event's instrumenter (something like “535801666f04d0298cd6”), and a hash with the payload, in that order. If an exception happens during that particular instrumentation the payload will have a key `:exception` with an array of two elements as value: a string with the name of the exception class, and the exception message. The `:exception_object` key of the payload will have the exception itself as the value: ``` event.payload[:exception] # => ["ArgumentError", "Invalid value"] event.payload[:exception_object] # => #<ArgumentError: Invalid value> ``` As the earlier example depicts, the class `ActiveSupport::Notifications::Event` is able to take the arguments as they come and provide an object-oriented interface to that data. 
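The instrument/subscribe contract described above can be sketched as a toy pub/sub in plain Ruby. The `subscribe`/`instrument` helpers here are hypothetical stand-ins, not `ActiveSupport::Notifications` itself:

```ruby
# Toy pub/sub: subscribers keyed by event name receive
# (name, start, finish, id, payload) after instrument runs its block.
require "securerandom"

SUBSCRIBERS = Hash.new { |h, k| h[k] = [] }

def subscribe(name, &block)
  SUBSCRIBERS[name] << block
end

def instrument(name, payload = {})
  start  = Time.now
  result = yield               # run the instrumented block first
  finish = Time.now
  id = SecureRandom.hex(10)    # unique id per instrumentation
  SUBSCRIBERS[name].each { |s| s.call(name, start, finish, id, payload) }
  result                       # instrument passes the block's value through
end

events = []
subscribe("render") do |name, start, finish, id, payload|
  events << { name: name, duration_ms: (finish - start) * 1000.0, payload: payload }
end

instrument("render", extra: :information) { "Foo" } # => "Foo"
events.first[:payload] # => { extra: :information }
```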
It is also possible to pass an object which responds to `call` method as the second parameter to the `subscribe` method instead of a block: ``` module ActionController class PageRequest def call(name, started, finished, unique_id, payload) Rails.logger.debug ['notification:', name, started, finished, unique_id, payload].join(' ') end end end ActiveSupport::Notifications.subscribe('process_action.action_controller', ActionController::PageRequest.new) ``` resulting in the following output within the logs including a hash with the payload: ``` notification: process_action.action_controller 2012-04-13 01:08:35 +0300 2012-04-13 01:08:35 +0300 af358ed7fab884532ec7 { controller: "Devise::SessionsController", action: "new", params: {"action"=>"new", "controller"=>"devise/sessions"}, format: :html, method: "GET", path: "/login/sign_in", status: 200, view_runtime: 279.3080806732178, db_runtime: 40.053 } ``` You can also subscribe to all events whose name matches a certain regexp: ``` ActiveSupport::Notifications.subscribe(/render/) do |*args| ... end ``` and even pass no argument to `subscribe`, in which case you are subscribing to all events. Temporary Subscriptions ----------------------- Sometimes you do not want to subscribe to an event for the entire life of the application. There are two ways to unsubscribe. WARNING: The instrumentation framework is designed for long-running subscribers, use this feature sparingly because it wipes some internal caches and that has a negative impact on performance. ### Subscribe While a Block Runs You can subscribe to some event temporarily while some block runs. For example, in ``` callback = lambda {|*args| ... } ActiveSupport::Notifications.subscribed(callback, "sql.active_record") do ... end ``` the callback will be called for all “sql.active\_record” events instrumented during the execution of the block. The callback is unsubscribed automatically after that. 
To record `started` and `finished` values with monotonic time, specify the optional `:monotonic` option to the `subscribed` method. The `:monotonic` option is set to `false` by default. ``` callback = lambda {|name, started, finished, unique_id, payload| ... } ActiveSupport::Notifications.subscribed(callback, "sql.active_record", monotonic: true) do ... end ``` ### Manual Unsubscription The `subscribe` method returns a subscriber object: ``` subscriber = ActiveSupport::Notifications.subscribe("render") do |*args| ... end ``` To prevent that block from being called anymore, just unsubscribe passing that reference: ``` ActiveSupport::Notifications.unsubscribe(subscriber) ``` You can also unsubscribe by passing the name of the subscriber object. Note that this will unsubscribe all subscriptions with the given name: ``` ActiveSupport::Notifications.unsubscribe("render") ``` Subscribers using a regexp or other pattern-matching object will remain subscribed to all events that match their original pattern, unless those events match a string passed to `unsubscribe`: ``` subscriber = ActiveSupport::Notifications.subscribe(/render/) { } ActiveSupport::Notifications.unsubscribe('render_template.action_view') subscriber.matches?('render_template.action_view') # => false subscriber.matches?('render_partial.action_view') # => true ``` Default Queue ------------- [`Notifications`](notifications) ships with a queue implementation that consumes and publishes events to all log subscribers. You can use any queue implementation you want. notifier[RW] instrument(name, payload = {}) { |payload| ... } Show source ``` # File activesupport/lib/active_support/notifications.rb, line 204 def instrument(name, payload = {}) if notifier.listening?(name) instrumenter.instrument(name, payload) { yield payload if block_given? } else yield payload if block_given? 
end end ``` instrumenter() Show source ``` # File activesupport/lib/active_support/notifications.rb, line 262 def instrumenter registry[notifier] ||= Instrumenter.new(notifier) end ``` monotonic\_subscribe(pattern = nil, callback = nil, &block) Show source ``` # File activesupport/lib/active_support/notifications.rb, line 247 def monotonic_subscribe(pattern = nil, callback = nil, &block) notifier.subscribe(pattern, callback, monotonic: true, &block) end ``` publish(name, \*args) Show source ``` # File activesupport/lib/active_support/notifications.rb, line 196 def publish(name, *args) notifier.publish(name, *args) end ``` subscribe(pattern = nil, callback = nil, &block) Show source ``` # File activesupport/lib/active_support/notifications.rb, line 243 def subscribe(pattern = nil, callback = nil, &block) notifier.subscribe(pattern, callback, monotonic: false, &block) end ``` Subscribe to a given event name with the passed `block`. You can subscribe to events by passing a [`String`](../string) to match exact event names, or by passing a [`Regexp`](../regexp) to match all events that match a pattern. 
``` ActiveSupport::Notifications.subscribe(/render/) do |*args| @event = ActiveSupport::Notifications::Event.new(*args) end ``` The `block` will receive five parameters with information about the event: ``` ActiveSupport::Notifications.subscribe('render') do |name, start, finish, id, payload| name # => String, name of the event (such as 'render' from above) start # => Time, when the instrumented block started execution finish # => Time, when the instrumented block ended execution id # => String, unique ID for the instrumenter that fired the event payload # => Hash, the payload end ``` If the block passed to the method only takes one parameter, it will yield an event object to the block: ``` ActiveSupport::Notifications.subscribe(/render/) do |event| @event = event end ``` Raises an error if invalid event name type is passed: ``` ActiveSupport::Notifications.subscribe(:render) {|*args| ...} #=> ArgumentError (pattern must be specified as a String, Regexp or empty) ``` subscribed(callback, pattern = nil, monotonic: false) { || ... } Show source ``` # File activesupport/lib/active_support/notifications.rb, line 251 def subscribed(callback, pattern = nil, monotonic: false, &block) subscriber = notifier.subscribe(pattern, callback, monotonic: monotonic) yield ensure unsubscribe(subscriber) end ``` unsubscribe(subscriber\_or\_name) Show source ``` # File activesupport/lib/active_support/notifications.rb, line 258 def unsubscribe(subscriber_or_name) notifier.unsubscribe(subscriber_or_name) end ```
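The string-vs-regexp unsubscribe semantics described above can be sketched with a toy `PatternSubscriber` (a hypothetical class) whose `matches?` mirrors the documented behaviour — the regexp keeps matching everything except names explicitly unsubscribed by string:

```ruby
# Sketch: a regexp subscriber matches any event its pattern covers,
# minus event names that were explicitly unsubscribed by string.
class PatternSubscriber
  def initialize(pattern)
    @pattern  = pattern
    @excluded = []
  end

  def unsubscribe(name)
    @excluded << name
  end

  def matches?(name)
    @pattern === name && !@excluded.include?(name)
  end
end

sub = PatternSubscriber.new(/render/)
sub.matches?("render_template.action_view") # => true
sub.unsubscribe("render_template.action_view")
sub.matches?("render_template.action_view") # => false
sub.matches?("render_partial.action_view")  # => true
```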
rails class ActiveSupport::Logger class ActiveSupport::Logger ============================ Parent: Logger Included modules: [ActiveSupport::LoggerSilence](loggersilence) logger\_outputs\_to?(logger, \*sources) Show source ``` # File activesupport/lib/active_support/logger.rb, line 16 def self.logger_outputs_to?(logger, *sources) logdev = logger.instance_variable_get(:@logdev) logger_source = logdev.dev if logdev.respond_to?(:dev) sources.any? { |source| source == logger_source } end ``` Returns true if the logger destination matches one of the sources ``` logger = Logger.new(STDOUT) ActiveSupport::Logger.logger_outputs_to?(logger, STDOUT) # => true ``` new(\*args, \*\*kwargs) Show source ``` # File activesupport/lib/active_support/logger.rb, line 80 def initialize(*args, **kwargs) super @formatter = SimpleFormatter.new end ``` Calls superclass method rails class ActiveSupport::ErrorReporter class ActiveSupport::ErrorReporter =================================== Parent: [Object](../object) `ActiveSupport::ErrorReporter` is a common interface for error reporting services. To rescue and report any unhandled error, you can use the `handle` method: ``` Rails.error.handle do do_something! end ``` If an error is raised, it will be reported and swallowed. Alternatively if you want to report the error but not swallow it, you can use `record` ``` Rails.error.record do do_something! end ``` Both methods can be restricted to only handle a specific exception class ``` maybe_tags = Rails.error.handle(Redis::BaseError) { redis.get("tags") } ``` You can also pass some extra context information that may be used by the error subscribers: ``` Rails.error.handle(context: { section: "admin" }) do # ... end ``` Additionally a `severity` can be passed along to communicate how important the error report is. `severity` can be one of `:error`, `:warning` or `:info`. Handled errors default to the `:warning` severity, and unhandled ones to `error`. 
Both `handle` and `record` pass through the return value from the block. In the case of `handle` rescuing an error, a fallback can be provided. The fallback must be a callable whose result will be returned when the block raises and is handled: ``` user = Rails.error.handle(fallback: -> { User.anonymous }) do User.find_by(params) end ``` SEVERITIES logger[RW] new(\*subscribers, logger: nil) Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 46 def initialize(*subscribers, logger: nil) @subscribers = subscribers.flatten @logger = logger end ``` handle(error\_class = StandardError, severity: :warning, context: {}, fallback: nil) { || ... } Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 57 def handle(error_class = StandardError, severity: :warning, context: {}, fallback: nil) yield rescue error_class => error report(error, handled: true, severity: severity, context: context) fallback.call if fallback end ``` Report any unhandled exception, and swallow it. ``` Rails.error.handle do 1 + '1' end ``` record(error\_class = StandardError, severity: :error, context: {}) { || ... } Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 64 def record(error_class = StandardError, severity: :error, context: {}) yield rescue error_class => error report(error, handled: false, severity: severity, context: context) raise end ``` report(error, handled:, severity: handled ? :warning : :error, context: {}) Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 95 def report(error, handled:, severity: handled ? 
:warning : :error, context: {}) unless SEVERITIES.include?(severity) raise ArgumentError, "severity must be one of #{SEVERITIES.map(&:inspect).join(", ")}, got: #{severity.inspect}" end full_context = ActiveSupport::ExecutionContext.to_h.merge(context) @subscribers.each do |subscriber| subscriber.report(error, handled: handled, severity: severity, context: full_context) rescue => subscriber_error if logger logger.fatal( "Error subscriber raised an error: #{subscriber_error.message} (#{subscriber_error.class})\n" + subscriber_error.backtrace.join("\n") ) else raise end end nil end ``` When the block based `handle` and `record` methods are not suitable, you can directly use `report` ``` Rails.error.report(error, handled: true) ``` set\_context(...) Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 88 def set_context(...) ActiveSupport::ExecutionContext.set(...) end ``` Update the execution context that is accessible to error subscribers ``` Rails.error.set_context(section: "checkout", user_id: @user.id) ``` See `ActiveSupport::ExecutionContext.set` subscribe(subscriber) Show source ``` # File activesupport/lib/active_support/error_reporter.rb, line 76 def subscribe(subscriber) unless subscriber.respond_to?(:report) raise ArgumentError, "Error subscribers must respond to #report" end @subscribers << subscriber end ``` Register a new error subscriber. The subscriber must respond to ``` report(Exception, handled: Boolean, context: Hash) ``` The `report` method `should` never raise an error. rails module ActiveSupport::Configurable module ActiveSupport::Configurable =================================== [`Configurable`](configurable) provides a `config` method to store and retrieve configuration options as an `OrderedOptions`. 
config() Show source ``` # File activesupport/lib/active_support/configurable.rb, line 145 def config @_config ||= self.class.config.inheritable_copy end ``` Reads and writes attributes from a configuration `OrderedOptions`. ``` require "active_support/configurable" class User include ActiveSupport::Configurable end user = User.new user.config.allowed_access = true user.config.level = 1 user.config.allowed_access # => true user.config.level # => 1 ``` rails module ActiveSupport::Cache module ActiveSupport::Cache ============================ See [`ActiveSupport::Cache::Store`](cache/store) for documentation. DEFAULT\_COMPRESS\_LIMIT OPTION\_ALIASES Mapping of canonical option names to aliases that a store will recognize. UNIVERSAL\_OPTIONS These options mean something to all cache implementations. Individual cache implementations may support additional options. format\_version[RW] expand\_cache\_key(key, namespace = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 100 def expand_cache_key(key, namespace = nil) expanded_cache_key = namespace ? +"#{namespace}/" : +"" if prefix = ENV["RAILS_CACHE_ID"] || ENV["RAILS_APP_VERSION"] expanded_cache_key << "#{prefix}/" end expanded_cache_key << retrieve_cache_key(key) expanded_cache_key end ``` Expands out the `key` argument into a key that can be used for the cache store. Optionally accepts a namespace, and all keys will be scoped within that namespace. If the `key` argument provided is an array, or responds to `to_a`, then each of elements in the array will be turned into parameters/keys and concatenated into a single key. For example: ``` ActiveSupport::Cache.expand_cache_key([:foo, :bar]) # => "foo/bar" ActiveSupport::Cache.expand_cache_key([:foo, :bar], "namespace") # => "namespace/foo/bar" ``` The `key` argument can also respond to `cache_key` or `to_param`. 
lookup\_store(store = nil, \*parameters) Show source ``` # File activesupport/lib/active_support/cache.rb, line 68 def lookup_store(store = nil, *parameters) case store when Symbol options = parameters.extract_options! # clean this up once Ruby 2.7 support is dropped # see https://github.com/rails/rails/pull/41522#discussion_r581186602 if options.empty? retrieve_store_class(store).new(*parameters) else retrieve_store_class(store).new(*parameters, **options) end when Array lookup_store(*store) when nil ActiveSupport::Cache::MemoryStore.new else store end end ``` Creates a new [`Store`](cache/store) object according to the given options. If no arguments are passed to this method, then a new [`ActiveSupport::Cache::MemoryStore`](cache/memorystore) object will be returned. If you pass a `Symbol` as the first argument, then a corresponding cache store class under the [`ActiveSupport::Cache`](cache) namespace will be created. For example: ``` ActiveSupport::Cache.lookup_store(:memory_store) # => returns a new ActiveSupport::Cache::MemoryStore object ActiveSupport::Cache.lookup_store(:mem_cache_store) # => returns a new ActiveSupport::Cache::MemCacheStore object ``` Any additional arguments will be passed to the corresponding cache store class's constructor: ``` ActiveSupport::Cache.lookup_store(:file_store, '/tmp/cache') # => same as: ActiveSupport::Cache::FileStore.new('/tmp/cache') ``` If the first argument is not a `Symbol`, then it will simply be returned: ``` ActiveSupport::Cache.lookup_store(MyOwnCacheStore.new) # => returns MyOwnCacheStore.new ``` rails module ActiveSupport::CompareWithRange module ActiveSupport::CompareWithRange ======================================= ===(value) Show source ``` # File activesupport/lib/active_support/core_ext/range/compare_range.rb, line 16 def ===(value) if value.is_a?(::Range) is_backwards_op = value.exclude_end? ? 
:>= : :> return false if value.begin && value.end && value.begin.public_send(is_backwards_op, value.end) # 1...10 includes 1..9 but it does not include 1..10. # 1..10 includes 1...11 but it does not include 1...12. operator = exclude_end? && !value.exclude_end? ? :< : :<= value_max = !exclude_end? && value.exclude_end? ? value.max : value.last super(value.first) && (self.end.nil? || value_max.public_send(operator, last)) else super end end ``` Extends the default Range#=== to support range comparisons. ``` (1..5) === (1..5) # => true (1..5) === (2..3) # => true (1..5) === (1...6) # => true (1..5) === (2..6) # => false ``` The native Range#=== behavior is untouched. ``` ('a'..'f') === ('c') # => true (5..9) === (11) # => false ``` The given range must be fully bounded, with both start and end. Calls superclass method include?(value) Show source ``` # File activesupport/lib/active_support/core_ext/range/compare_range.rb, line 41 def include?(value) if value.is_a?(::Range) is_backwards_op = value.exclude_end? ? :>= : :> return false if value.begin && value.end && value.begin.public_send(is_backwards_op, value.end) # 1...10 includes 1..9 but it does not include 1..10. # 1..10 includes 1...11 but it does not include 1...12. operator = exclude_end? && !value.exclude_end? ? :< : :<= value_max = !exclude_end? && value.exclude_end? ? value.max : value.last super(value.first) && (self.end.nil? || value_max.public_send(operator, last)) else super end end ``` Extends the default Range#include? to support range comparisons. ``` (1..5).include?(1..5) # => true (1..5).include?(2..3) # => true (1..5).include?(1...6) # => true (1..5).include?(2..6) # => false ``` The native Range#include? behavior is untouched. ``` ('a'..'f').include?('c') # => true (5..9).include?(11) # => false ``` The given range must be fully bounded, with both start and end. 
Calls superclass method rails class ActiveSupport::TimeZone class ActiveSupport::TimeZone ============================== Parent: [Object](../object) Included modules: The [`TimeZone`](timezone) class serves as a wrapper around TZInfo::Timezone instances. It allows us to do the following: * Limit the set of zones provided by TZInfo to a meaningful subset of 134 zones. * Retrieve and display zones with a friendlier name (e.g., “Eastern [`Time`](../time) (US & Canada)” instead of “America/New\_York”). * Lazily load TZInfo::Timezone instances only when they're needed. * Create [`ActiveSupport::TimeWithZone`](timewithzone) instances via TimeZone's `local`, `parse`, `at` and `now` methods. If you set `config.time_zone` in the Rails Application, you can access this [`TimeZone`](timezone) object via `Time.zone`: ``` # application.rb: class Application < Rails::Application config.time_zone = 'Eastern Time (US & Canada)' end Time.zone # => #<ActiveSupport::TimeZone:0x514834...> Time.zone.name # => "Eastern Time (US & Canada)" Time.zone.now # => Sun, 18 May 2008 14:30:44 EDT -04:00 ``` MAPPING Keys are Rails [`TimeZone`](timezone) names, values are TZInfo identifiers. name[R] tzinfo[R] [](arg) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 230 def [](arg) case arg when self arg when String begin @lazy_zones_map[arg] ||= create(arg) rescue TZInfo::InvalidTimezoneIdentifier nil end when TZInfo::Timezone @lazy_zones_map[arg.name] ||= create(arg.name, nil, arg) when Numeric, ActiveSupport::Duration arg *= 3600 if arg.abs <= 13 all.find { |z| z.utc_offset == arg.to_i } else raise ArgumentError, "invalid argument to TimeZone[]: #{arg.inspect}" end end ``` Locate a specific time zone object. If the argument is a string, it is interpreted to mean the name of the timezone to locate. If it is a numeric value it is either the hour offset, or the second offset, of the timezone to find. (The first one with that offset will be returned.) 
Returns `nil` if no such time zone is known to the system. all() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 221 def all @zones ||= zones_map.values.sort end ``` Returns an array of all [`TimeZone`](timezone) objects. There are multiple [`TimeZone`](timezone) objects per time zone, in many cases, to make it easier for users to find their own time zone. country\_zones(country\_code) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 258 def country_zones(country_code) code = country_code.to_s.upcase @country_zones[code] ||= load_country_zones(code) end ``` A convenience method for returning a collection of [`TimeZone`](timezone) objects for time zones in the country specified by its ISO 3166-1 Alpha2 code. create(name) Alias for: [new](timezone#method-c-new) find\_tzinfo(name) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 205 def find_tzinfo(name) TZInfo::Timezone.get(MAPPING[name] || name) end ``` new(name) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 214 def new(name) self[name] end ``` Returns a [`TimeZone`](timezone) instance with the given name, or `nil` if no such [`TimeZone`](timezone) instance exists. (This exists to support the use of this class with the `composed_of` macro.) Also aliased as: [create](timezone#method-c-create) new(name, utc\_offset = nil, tzinfo = nil) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 301 def initialize(name, utc_offset = nil, tzinfo = nil) @name = name @utc_offset = utc_offset @tzinfo = tzinfo || TimeZone.find_tzinfo(name) end ``` Create a new [`TimeZone`](timezone) object with the given name and offset. The offset is the number of seconds that this time zone is offset from UTC (GMT). Seconds were chosen as the offset unit because that is the unit that Ruby uses to represent time zone offsets (see Time#utc\_offset). 
seconds\_to\_utc\_offset(seconds, colon = true) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 197 def seconds_to_utc_offset(seconds, colon = true) format = colon ? UTC_OFFSET_WITH_COLON : UTC_OFFSET_WITHOUT_COLON sign = (seconds < 0 ? "-" : "+") hours = seconds.abs / 3600 minutes = (seconds.abs % 3600) / 60 format % [sign, hours, minutes] end ``` Assumes self represents an offset from UTC in seconds (as returned from Time#utc\_offset) and turns this into an +HH:MM formatted string. ``` ActiveSupport::TimeZone.seconds_to_utc_offset(-21_600) # => "-06:00" ``` us\_zones() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 252 def us_zones country_zones(:us) end ``` A convenience method for returning a collection of [`TimeZone`](timezone) objects for time zones in the USA. <=>(zone) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 324 def <=>(zone) return unless zone.respond_to? :utc_offset result = (utc_offset <=> zone.utc_offset) result = (name <=> zone.name) if result == 0 result end ``` Compare this time zone to the parameter. The two are compared first on their offsets, and then by name. =~(re) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 333 def =~(re) re === name || re === MAPPING[name] end ``` Compare [`name`](timezone#attribute-i-name) and TZInfo identifier to a supplied regexp, returning `true` if a match is found. at(\*args) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 370 def at(*args) Time.at(*args).utc.in_time_zone(self) end ``` [`Method`](../method) for creating new [`ActiveSupport::TimeWithZone`](timewithzone) instance in time zone of `self` from number of seconds since the Unix epoch. 
``` Time.zone = 'Hawaii' # => "Hawaii" Time.utc(2000).to_f # => 946684800.0 Time.zone.at(946684800.0) # => Fri, 31 Dec 1999 14:00:00 HST -10:00 ``` A second argument can be supplied to specify sub-second precision. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.at(946684800, 123456.789).nsec # => 123456789 ``` formatted\_offset(colon = true, alternate\_utc\_string = nil) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 318 def formatted_offset(colon = true, alternate_utc_string = nil) utc_offset == 0 && alternate_utc_string || self.class.seconds_to_utc_offset(utc_offset, colon) end ``` Returns a formatted string of the offset from UTC, or an alternative string if the time zone is already UTC. ``` zone = ActiveSupport::TimeZone['Central Time (US & Canada)'] zone.formatted_offset # => "-06:00" zone.formatted_offset(false) # => "-0600" ``` iso8601(str) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 387 def iso8601(str) # Historically `Date._iso8601(nil)` returns `{}`, but in the `date` gem versions `3.2.1`, `3.1.2`, `3.0.2`, # and `2.0.1`, `Date._iso8601(nil)` raises `TypeError` https://github.com/ruby/date/issues/39 # Future `date` releases are expected to revert back to the original behavior. raise ArgumentError, "invalid date" if str.nil? 
parts = Date._iso8601(str) year = parts.fetch(:year) if parts.key?(:yday) ordinal_date = Date.ordinal(year, parts.fetch(:yday)) month = ordinal_date.month day = ordinal_date.day else month = parts.fetch(:mon) day = parts.fetch(:mday) end time = Time.new( year, month, day, parts.fetch(:hour, 0), parts.fetch(:min, 0), parts.fetch(:sec, 0) + parts.fetch(:sec_fraction, 0), parts.fetch(:offset, 0) ) if parts[:offset] TimeWithZone.new(time.utc, self) else TimeWithZone.new(nil, self, time) end rescue Date::Error, KeyError raise ArgumentError, "invalid date" end ``` [`Method`](../method) for creating new [`ActiveSupport::TimeWithZone`](timewithzone) instance in time zone of `self` from an ISO 8601 string. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.iso8601('1999-12-31T14:00:00') # => Fri, 31 Dec 1999 14:00:00 HST -10:00 ``` If the time components are missing then they will be set to zero. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.iso8601('1999-12-31') # => Fri, 31 Dec 1999 00:00:00 HST -10:00 ``` If the string is invalid then an `ArgumentError` will be raised unlike `parse` which usually returns `nil` when given an invalid date string. local(\*args) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 354 def local(*args) time = Time.utc(*args) ActiveSupport::TimeWithZone.new(nil, self, time) end ``` [`Method`](../method) for creating new [`ActiveSupport::TimeWithZone`](timewithzone) instance in time zone of `self` from given values. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.local(2007, 2, 1, 15, 30, 45) # => Thu, 01 Feb 2007 15:30:45 HST -10:00 ``` local\_to\_utc(time, dst = true) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 542 def local_to_utc(time, dst = true) tzinfo.local_to_utc(time, dst) end ``` Adjust the given time to the simultaneous time in UTC. Returns a Time.utc() instance. 
match?(re) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 339 def match?(re) (re == name) || (re == MAPPING[name]) || ((Regexp === re) && (re.match?(name) || re.match?(MAPPING[name]))) end ``` Compare [`name`](timezone#attribute-i-name) and TZInfo identifier to a supplied regexp, returning `true` if a match is found. now() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 507 def now time_now.utc.in_time_zone(self) end ``` Returns an [`ActiveSupport::TimeWithZone`](timewithzone) instance representing the current time in the time zone represented by `self`. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.now # => Wed, 23 Jan 2008 20:24:27 HST -10:00 ``` parse(str, now = now()) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 444 def parse(str, now = now()) parts_to_time(Date._parse(str, false), now) end ``` [`Method`](../method) for creating new [`ActiveSupport::TimeWithZone`](timewithzone) instance in time zone of `self` from parsed string. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.parse('1999-12-31 14:00:00') # => Fri, 31 Dec 1999 14:00:00 HST -10:00 ``` If upper components are missing from the string, they are supplied from [`TimeZone#now`](timezone#method-i-now): ``` Time.zone.now # => Fri, 31 Dec 1999 14:00:00 HST -10:00 Time.zone.parse('22:30:00') # => Fri, 31 Dec 1999 22:30:00 HST -10:00 ``` However, if the date component is not provided, but any other upper components are supplied, then the day of the month defaults to 1: ``` Time.zone.parse('Mar 2000') # => Wed, 01 Mar 2000 00:00:00 HST -10:00 ``` If the string is invalid then an `ArgumentError` could be raised. 
period\_for\_local(time, dst = true) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 554 def period_for_local(time, dst = true) tzinfo.period_for_local(time, dst) { |periods| periods.last } end ``` Available so that [`TimeZone`](timezone) instances respond like TZInfo::Timezone instances. period\_for\_utc(time) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 548 def period_for_utc(time) tzinfo.period_for_utc(time) end ``` Available so that [`TimeZone`](timezone) instances respond like TZInfo::Timezone instances. rfc3339(str) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 460 def rfc3339(str) parts = Date._rfc3339(str) raise ArgumentError, "invalid date" if parts.empty? time = Time.new( parts.fetch(:year), parts.fetch(:mon), parts.fetch(:mday), parts.fetch(:hour), parts.fetch(:min), parts.fetch(:sec) + parts.fetch(:sec_fraction, 0), parts.fetch(:offset) ) TimeWithZone.new(time.utc, self) end ``` [`Method`](../method) for creating new [`ActiveSupport::TimeWithZone`](timewithzone) instance in time zone of `self` from an RFC 3339 string. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.rfc3339('2000-01-01T00:00:00Z') # => Fri, 31 Dec 1999 14:00:00 HST -10:00 ``` If the time or zone components are missing then an `ArgumentError` will be raised. This is much stricter than either `parse` or `iso8601` which allow for missing components. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.rfc3339('1999-12-31') # => ArgumentError: invalid date ``` strptime(str, format, now = now()) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 498 def strptime(str, format, now = now()) parts_to_time(DateTime._strptime(str, format), now) end ``` Parses `str` according to `format` and returns an [`ActiveSupport::TimeWithZone`](timewithzone). Assumes that `str` is a time in the time zone `self`, unless `format` includes an explicit time zone. 
(This is the same behavior as `parse`.) In either case, the returned [`TimeWithZone`](timewithzone) has the timezone of `self`. ``` Time.zone = 'Hawaii' # => "Hawaii" Time.zone.strptime('1999-12-31 14:00:00', '%Y-%m-%d %H:%M:%S') # => Fri, 31 Dec 1999 14:00:00 HST -10:00 ``` If upper components are missing from the string, they are supplied from [`TimeZone#now`](timezone#method-i-now): ``` Time.zone.now # => Fri, 31 Dec 1999 14:00:00 HST -10:00 Time.zone.strptime('22:30:00', '%H:%M:%S') # => Fri, 31 Dec 1999 22:30:00 HST -10:00 ``` However, if the date component is not provided, but any other upper components are supplied, then the day of the month defaults to 1: ``` Time.zone.strptime('Mar 2000', '%b %Y') # => Wed, 01 Mar 2000 00:00:00 HST -10:00 ``` to\_s() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 345 def to_s "(GMT#{formatted_offset}) #{name}" end ``` Returns a textual representation of this time zone. today() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 512 def today tzinfo.now.to_date end ``` Returns the current date in this time zone. tomorrow() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 517 def tomorrow today + 1 end ``` Returns the next date in this time zone. utc\_offset() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 308 def utc_offset @utc_offset || tzinfo&.current_period&.base_utc_offset end ``` Returns the offset of this time zone from UTC in seconds. utc\_to\_local(time) Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 533 def utc_to_local(time) tzinfo.utc_to_local(time).yield_self do |t| ActiveSupport.utc_to_local_returns_utc_offset_times ? t : Time.utc(t.year, t.month, t.day, t.hour, t.min, t.sec, t.sec_fraction * 1_000_000) end end ``` Adjust the given time to the simultaneous time in the time zone represented by `self`. 
Returns a local time with the appropriate offset – if you want an [`ActiveSupport::TimeWithZone`](timewithzone) instance, use [`Time#in_time_zone()`](../dateandtime/zones#method-i-in_time_zone) instead. As of tzinfo 2, [`utc_to_local`](timezone#method-i-utc_to_local) returns a [`Time`](../time) with a non-zero utc\_offset. See the `utc_to_local_returns_utc_offset_times` config for more info. yesterday() Show source ``` # File activesupport/lib/active_support/values/time_zone.rb, line 522 def yesterday today - 1 end ``` Returns the previous date in this time zone.
rails class ActiveSupport::InheritableOptions class ActiveSupport::InheritableOptions ======================================== Parent: [ActiveSupport::OrderedOptions](orderedoptions) `InheritableOptions` provides a constructor to build an `OrderedOptions` hash inherited from another hash. Use this if you already have some hash and you want to create a new one based on it. ``` h = ActiveSupport::InheritableOptions.new({ girl: 'Mary', boy: 'John' }) h.girl # => 'Mary' h.boy # => 'John' ``` new(parent = nil) Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 80 def initialize(parent = nil) if parent.kind_of?(OrderedOptions) # use the faster _get when dealing with OrderedOptions super() { |h, k| parent._get(k) } elsif parent super() { |h, k| parent[k] } else super() end end ``` Calls superclass method inheritable\_copy() Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 91 def inheritable_copy self.class.new(self) end ``` rails module ActiveSupport::SecurityUtils module ActiveSupport::SecurityUtils ==================================== fixed\_length\_secure\_compare(a, b) Show source ``` # File activesupport/lib/active_support/security_utils.rb, line 11 def fixed_length_secure_compare(a, b) OpenSSL.fixed_length_secure_compare(a, b) end ``` secure\_compare(a, b) Show source ``` # File activesupport/lib/active_support/security_utils.rb, line 33 def secure_compare(a, b) a.bytesize == b.bytesize && fixed_length_secure_compare(a, b) end ``` Secure string comparison for strings of variable length. While a timing attack would not be able to discern the content of a secret compared via [`secure_compare`](securityutils#method-c-secure_compare), it is possible to determine the secret length. This should be considered when using [`secure_compare`](securityutils#method-c-secure_compare) to compare weak, short secrets to user input. 
rails module ActiveSupport::LoggerSilence module ActiveSupport::LoggerSilence ==================================== silence(severity = Logger::ERROR) { |self| ... } Show source ``` # File activesupport/lib/active_support/logger_silence.rb, line 17 def silence(severity = Logger::ERROR) silencer ? log_at(severity) { yield self } : yield(self) end ``` Silences the logger for the duration of the block. rails class ActiveSupport::StringInquirer class ActiveSupport::StringInquirer ==================================== Parent: [String](../string) Wrapping a string in this class gives you a prettier way to test for equality. The value returned by `Rails.env` is wrapped in a [`StringInquirer`](stringinquirer) object, so instead of calling this: ``` Rails.env == 'production' ``` you can call this: ``` Rails.env.production? ``` Instantiating a new [`StringInquirer`](stringinquirer) ------------------------------------------------------ ``` vehicle = ActiveSupport::StringInquirer.new('car') vehicle.car? # => true vehicle.bike? # => false ``` rails class ActiveSupport::ArrayInquirer class ActiveSupport::ArrayInquirer =================================== Parent: [Array](../array) Wrapping an array in an `ArrayInquirer` gives a friendlier way to check its string-like contents: ``` variants = ActiveSupport::ArrayInquirer.new([:phone, :tablet]) variants.phone? # => true variants.tablet? # => true variants.desktop? # => false ``` any?(\*candidates) Show source ``` # File activesupport/lib/active_support/array_inquirer.rb, line 25 def any?(*candidates) if candidates.none? super else candidates.any? do |candidate| include?(candidate.to_sym) || include?(candidate.to_s) end end end ``` Passes each element of `candidates` collection to [`ArrayInquirer`](arrayinquirer) collection. The method returns true if any element from the [`ArrayInquirer`](arrayinquirer) collection is equal to the stringified or symbolized form of any element in the `candidates` collection. 
If `candidates` collection is not given, method returns true. ``` variants = ActiveSupport::ArrayInquirer.new([:phone, :tablet]) variants.any? # => true variants.any?(:phone, :tablet) # => true variants.any?('phone', 'desktop') # => true variants.any?(:desktop, :watch) # => false ``` Calls superclass method rails module ActiveSupport::DescendantsTracker module ActiveSupport::DescendantsTracker ========================================= This module provides an internal implementation to track descendants which is faster than iterating through ObjectSpace. descendants(klass) Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 62 def descendants(klass) klass.descendants end ``` direct\_descendants(klass) Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 11 def direct_descendants(klass) ActiveSupport::Deprecation.warn(<<~MSG) ActiveSupport::DescendantsTracker.direct_descendants is deprecated and will be removed in Rails 7.1. Use ActiveSupport::DescendantsTracker.subclasses instead. MSG subclasses(klass) end ``` store\_inherited(klass, descendant) Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 138 def store_inherited(klass, descendant) (@@direct_descendants[klass] ||= DescendantsArray.new) << descendant end ``` This is the only method that is not thread safe, but is only ever called during the eager loading phase. 
subclasses(klass) Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 58 def subclasses(klass) klass.subclasses end ``` descendants() Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 88 def descendants subclasses.concat(subclasses.flat_map(&:descendants)) end ``` direct\_descendants() Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 92 def direct_descendants ActiveSupport::Deprecation.warn(<<~MSG) ActiveSupport::DescendantsTracker#direct_descendants is deprecated and will be removed in Rails 7.1. Use #subclasses instead. MSG subclasses end ``` inherited(base) Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 153 def inherited(base) DescendantsTracker.store_inherited(self, base) super end ``` Calls superclass method subclasses() Show source ``` # File activesupport/lib/active_support/descendants_tracker.rb, line 82 def subclasses subclasses = super subclasses.reject! { |d| @@excluded_descendants[d] } subclasses end ``` Calls superclass method rails class ActiveSupport::FileUpdateChecker class ActiveSupport::FileUpdateChecker ======================================= Parent: [Object](../object) [`FileUpdateChecker`](fileupdatechecker) specifies the API used by Rails to watch files and control reloading. The API depends on four methods: * `initialize` which expects two parameters and one block as described below. * `updated?` which returns a boolean if there were updates in the filesystem or not. * `execute` which executes the given block on initialization and updates the latest watched files and timestamp. * `execute_if_updated` which just executes the block if it was updated. After initialization, a call to `execute_if_updated` must execute the block only if there was really a change in the filesystem. This class is used by Rails to reload the I18n framework whenever they are changed upon a new request. 
``` i18n_reloader = ActiveSupport::FileUpdateChecker.new(paths) do I18n.reload! end ActiveSupport::Reloader.to_prepare do i18n_reloader.execute_if_updated end ``` new(files, dirs = {}, &block) Show source ``` # File activesupport/lib/active_support/file_update_checker.rb, line 42 def initialize(files, dirs = {}, &block) unless block raise ArgumentError, "A block is required to initialize a FileUpdateChecker" end @files = files.freeze @glob = compile_glob(dirs) @block = block @watched = nil @updated_at = nil @last_watched = watched @last_update_at = updated_at(@last_watched) end ``` It accepts two parameters on initialization. The first is an array of files and the second is an optional hash of directories. The hash must have directories as keys and the value is an array of extensions to be watched under that directory. This method must also receive a block that will be called once a path changes. The array of files and list of directories cannot be changed after [`FileUpdateChecker`](fileupdatechecker) has been initialized. execute() Show source ``` # File activesupport/lib/active_support/file_update_checker.rb, line 80 def execute @last_watched = watched @last_update_at = updated_at(@last_watched) @block.call ensure @watched = nil @updated_at = nil end ``` Executes the given block and updates the latest watched files and timestamp. execute\_if\_updated() { || ... } Show source ``` # File activesupport/lib/active_support/file_update_checker.rb, line 90 def execute_if_updated if updated? yield if block_given? execute true else false end end ``` Execute the block given if updated. updated?() Show source ``` # File activesupport/lib/active_support/file_update_checker.rb, line 61 def updated? 
current_watched = watched if @last_watched.size != current_watched.size @watched = current_watched true else current_updated_at = updated_at(current_watched) if @last_update_at < current_updated_at @watched = current_watched @updated_at = current_updated_at true else false end end end ``` Check if any of the entries were updated. If so, the watched and/or updated\_at values are cached until the block is executed via `execute` or `execute_if_updated`. rails class ActiveSupport::EncryptedConfiguration class ActiveSupport::EncryptedConfiguration ============================================ Parent: EncryptedFile new(config\_path:, key\_path:, env\_key:, raise\_if\_missing\_key:) Show source ``` # File activesupport/lib/active_support/encrypted_configuration.rb, line 14 def initialize(config_path:, key_path:, env_key:, raise_if_missing_key:) super content_path: config_path, key_path: key_path, env_key: env_key, raise_if_missing_key: raise_if_missing_key end ``` Calls superclass method config() Show source ``` # File activesupport/lib/active_support/encrypted_configuration.rb, line 32 def config @config ||= deserialize(read).deep_symbolize_keys end ``` read() Show source ``` # File activesupport/lib/active_support/encrypted_configuration.rb, line 20 def read super rescue ActiveSupport::EncryptedFile::MissingContentError "" end ``` Allow a config to be started without a file present Calls superclass method write(contents) Show source ``` # File activesupport/lib/active_support/encrypted_configuration.rb, line 26 def write(contents) deserialize(contents) super end ``` Calls superclass method rails module ActiveSupport::Rescuable module ActiveSupport::Rescuable ================================ [`Rescuable`](rescuable) module adds support for easier exception handling. 
rescue\_with\_handler(exception) Show source ``` # File activesupport/lib/active_support/rescuable.rb, line 164 def rescue_with_handler(exception) self.class.rescue_with_handler exception, object: self end ``` Delegates to the class method, but uses the instance as the subject for rescue\_from handlers (method calls, instance\_exec blocks). rails module ActiveSupport::PerThreadRegistry module ActiveSupport::PerThreadRegistry ======================================== NOTE: This approach has been deprecated for end-user code in favor of [thread\_mattr\_accessor](../module#method-i-thread_mattr_accessor) and friends. Please use that approach instead. This module is used to encapsulate access to thread local variables. Instead of polluting the thread locals namespace: ``` Thread.current[:connection_handler] ``` you define a class that extends this module: ``` module ActiveRecord class RuntimeRegistry extend ActiveSupport::PerThreadRegistry attr_accessor :connection_handler end end ``` and invoke the declared instance accessors as class methods. So ``` ActiveRecord::RuntimeRegistry.connection_handler = connection_handler ``` sets a connection handler local to the current thread, and ``` ActiveRecord::RuntimeRegistry.connection_handler ``` returns a connection handler local to the current thread. This feature is accomplished by instantiating the class and storing the instance as a thread local keyed by the class name. In the example above a key “ActiveRecord::RuntimeRegistry” is stored in `Thread.current`. The class methods proxy to said thread local instance. If the class has an initializer, it must accept no arguments. extended(object) Show source ``` # File activesupport/lib/active_support/per_thread_registry.rb, line 42 def self.extended(object) ActiveSupport::Deprecation.warn(<<~MSG) ActiveSupport::PerThreadRegistry is deprecated and will be removed in Rails 7.1. Use `Module#thread_mattr_accessor` instead. 
MSG object.instance_variable_set :@per_thread_registry_key, object.name.freeze end ``` instance() Show source ``` # File activesupport/lib/active_support/per_thread_registry.rb, line 50 def instance Thread.current[@per_thread_registry_key] ||= new end ``` rails class ActiveSupport::DeprecationException class ActiveSupport::DeprecationException ========================================== Parent: StandardError Raised when `ActiveSupport::Deprecation::Behavior#behavior` is set to `:raise`. Set `:raise` as the behavior to raise errors and proactively surface exceptions from deprecations. rails class ActiveSupport::CurrentAttributes class ActiveSupport::CurrentAttributes ======================================= Parent: [Object](../object) Included modules: [ActiveSupport::Callbacks](callbacks) Abstract superclass that provides a thread-isolated attributes singleton, which resets automatically before and after each request. This allows you to keep all the per-request attributes easily available to the whole system. 
MSG object.instance_variable_set :@per_thread_registry_key, object.name.freeze end ``` instance() Show source ``` # File activesupport/lib/active_support/per_thread_registry.rb, line 50 def instance Thread.current[@per_thread_registry_key] ||= new end ``` rails class ActiveSupport::DeprecationException class ActiveSupport::DeprecationException ========================================== Parent: StandardError Raised when `ActiveSupport::Deprecation::Behavior#behavior` is set to `:raise`. Set `:raise` as the behavior to raise errors and proactively surface exceptions from deprecations.
The following full app-like example demonstrates how to use a Current class to facilitate easy access to the global, per-request attributes without passing them deeply around everywhere: ``` # app/models/current.rb class Current < ActiveSupport::CurrentAttributes attribute :account, :user attribute :request_id, :user_agent, :ip_address resets { Time.zone = nil } def user=(user) super self.account = user.account Time.zone = user.time_zone end end # app/controllers/concerns/authentication.rb module Authentication extend ActiveSupport::Concern included do before_action :authenticate end private def authenticate if authenticated_user = User.find_by(id: cookies.encrypted[:user_id]) Current.user = authenticated_user else redirect_to new_session_url end end end # app/controllers/concerns/set_current_request_details.rb module SetCurrentRequestDetails extend ActiveSupport::Concern included do before_action do Current.request_id = request.uuid Current.user_agent = request.user_agent Current.ip_address = request.ip end end end class ApplicationController < ActionController::Base include Authentication include SetCurrentRequestDetails end class MessagesController < ApplicationController def create Current.account.messages.create(message_params) end end class Message < ApplicationRecord belongs_to :creator, default: -> { Current.user } after_create { |message| Event.create(record: message) } end class Event < ApplicationRecord before_create do self.request_id = Current.request_id self.user_agent = Current.user_agent self.ip_address = Current.ip_address end end ``` A word of caution: It's easy to overdo a global singleton like Current and tangle your model as a result. Current should only be used for a few, top-level globals, like account, user, and request details. The attributes stuck in Current should be used by more or less all actions on all requests. If you start sticking controller-specific attributes in there, you're going to create a mess. 
attributes[RW] after\_reset(&block) Alias for: [resets](currentattributes#method-c-resets) attribute(\*names) Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 100 def attribute(*names) ActiveSupport::CodeGenerator.batch(generated_attribute_methods, __FILE__, __LINE__) do |owner| names.each do |name| owner.define_cached_method(name, namespace: :current_attributes) do |batch| batch << "def #{name}" << "attributes[:#{name}]" << "end" end owner.define_cached_method("#{name}=", namespace: :current_attributes) do |batch| batch << "def #{name}=(value)" << "attributes[:#{name}] = value" << "end" end end end ActiveSupport::CodeGenerator.batch(singleton_class, __FILE__, __LINE__) do |owner| names.each do |name| owner.define_cached_method(name, namespace: :current_attributes_delegation) do |batch| batch << "def #{name}" << "instance.#{name}" << "end" end owner.define_cached_method("#{name}=", namespace: :current_attributes_delegation) do |batch| batch << "def #{name}=(value)" << "instance.#{name} = value" << "end" end end end end ``` Declares one or more attributes that will be given both class and instance accessor methods. before\_reset(&block) Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 137 def before_reset(&block) set_callback :reset, :before, &block end ``` Calls this block before [`reset`](currentattributes#method-i-reset) is called on the instance. Used for resetting external collaborators that depend on current values. instance() Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 95 def instance current_instances[current_instances_key] ||= new end ``` Returns singleton instance for this class in this thread. If none exists, one is created. 
new() Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 188 def initialize @attributes = {} end ``` resets(&block) Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 142 def resets(&block) set_callback :reset, :after, &block end ``` Calls this block after [`reset`](currentattributes#method-i-reset) is called on the instance. Used for resetting external collaborators, like [`Time.zone`](../time#method-c-zone). Also aliased as: [after\_reset](currentattributes#method-c-after_reset) reset() Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 211 def reset run_callbacks :reset do self.attributes = {} end end ``` Reset all attributes. Should be called before and after actions, when used as a per-request singleton. set(set\_attributes) { || ... } Show source ``` # File activesupport/lib/active_support/current_attributes.rb, line 202 def set(set_attributes) old_attributes = compute_attributes(set_attributes.keys) assign_attributes(set_attributes) yield ensure assign_attributes(old_attributes) end ``` Expose one or more attributes within a block. Old values are restored after the block concludes. Example demonstrating the common need to set Current attributes outside the request cycle: ``` class Chat::PublicationJob < ApplicationJob def perform(attributes, room_number, creator) Current.set(person: creator) do Chat::Publisher.publish(attributes: attributes, room_number: room_number) end end end ```
rails class ActiveSupport::MessageVerifier class ActiveSupport::MessageVerifier ===================================== Parent: [Object](../object) `MessageVerifier` makes it easy to generate and verify messages which are signed to prevent tampering. This is useful for cases like remember-me tokens and auto-unsubscribe links where the session store isn't suitable or available. Remember Me: ``` cookies[:remember_me] = @verifier.generate([@user.id, 2.weeks.from_now]) ``` In the authentication filter: ``` id, time = @verifier.verify(cookies[:remember_me]) if Time.now < time self.current_user = User.find(id) end ``` By default it uses Marshal to serialize the message. If you want to use another serialization method, you can set the serializer in the options hash upon initialization: ``` @verifier = ActiveSupport::MessageVerifier.new('s3Krit', serializer: YAML) ``` `MessageVerifier` creates HMAC signatures using SHA1 hash algorithm by default. If you want to use a different hash algorithm, you can change it by providing `:digest` key as an option while initializing the verifier: ``` @verifier = ActiveSupport::MessageVerifier.new('s3Krit', digest: 'SHA256') ``` ### Confining messages to a specific purpose By default any message can be used throughout your app. But they can also be confined to a specific `:purpose`. ``` token = @verifier.generate("this is the chair", purpose: :login) ``` Then that same purpose must be passed when verifying to get the data back out: ``` @verifier.verified(token, purpose: :login) # => "this is the chair" @verifier.verified(token, purpose: :shipping) # => nil @verifier.verified(token) # => nil @verifier.verify(token, purpose: :login) # => "this is the chair" @verifier.verify(token, purpose: :shipping) # => ActiveSupport::MessageVerifier::InvalidSignature @verifier.verify(token) # => ActiveSupport::MessageVerifier::InvalidSignature ``` Likewise, if a message has no purpose it won't be returned when verifying with a specific purpose. 
``` token = @verifier.generate("the conversation is lively") @verifier.verified(token, purpose: :scare_tactics) # => nil @verifier.verified(token) # => "the conversation is lively" @verifier.verify(token, purpose: :scare_tactics) # => ActiveSupport::MessageVerifier::InvalidSignature @verifier.verify(token) # => "the conversation is lively" ``` ### Making messages expire By default messages last forever and verifying one year from now will still return the original value. But messages can be set to expire at a given time with `:expires_in` or `:expires_at`. ``` @verifier.generate("parcel", expires_in: 1.month) @verifier.generate("doowad", expires_at: Time.now.end_of_year) ``` Then the messages can be verified and returned up to the expire time. Thereafter, the `verified` method returns `nil` while `verify` raises `ActiveSupport::MessageVerifier::InvalidSignature`. ### Rotating keys [`MessageVerifier`](messageverifier) also supports rotating out old configurations by falling back to a stack of verifiers. Call `rotate` to build and add a verifier so either `verified` or `verify` will also try verifying with the fallback. By default any rotated verifiers use the values of the primary verifier unless specified otherwise. You'd give your verifier the new defaults: ``` verifier = ActiveSupport::MessageVerifier.new(@secret, digest: "SHA512", serializer: JSON) ``` Then gradually rotate the old values out by adding them as fallbacks. Any message generated with the old values will then work until the rotation is removed. ``` verifier.rotate old_secret # Fallback to an old secret instead of @secret. verifier.rotate digest: "SHA256" # Fallback to an old digest instead of SHA512. verifier.rotate serializer: Marshal # Fallback to an old serializer instead of JSON. 
``` Though the above would most likely be combined into one rotation: ``` verifier.rotate old_secret, digest: "SHA256", serializer: Marshal ``` new(secret, digest: nil, serializer: nil) Show source ``` # File activesupport/lib/active_support/message_verifier.rb, line 110 def initialize(secret, digest: nil, serializer: nil) raise ArgumentError, "Secret should not be nil." unless secret @secret = secret @digest = digest&.to_s || "SHA1" @serializer = serializer || Marshal end ``` generate(value, expires\_at: nil, expires\_in: nil, purpose: nil) Show source ``` # File activesupport/lib/active_support/message_verifier.rb, line 188 def generate(value, expires_at: nil, expires_in: nil, purpose: nil) data = encode(Messages::Metadata.wrap(@serializer.dump(value), expires_at: expires_at, expires_in: expires_in, purpose: purpose)) "#{data}#{SEPARATOR}#{generate_digest(data)}" end ``` Generates a signed message for the provided value. The message is signed with the `MessageVerifier`'s secret. Returns Base64-encoded message joined with the generated signature. ``` verifier = ActiveSupport::MessageVerifier.new 's3Krit' verifier.generate 'a private message' # => "BAhJIhRwcml2YXRlLW1lc3NhZ2UGOgZFVA==--e2d724331ebdee96a10fb99b089508d1c72bd772" ``` valid\_message?(signed\_message) Show source ``` # File activesupport/lib/active_support/message_verifier.rb, line 126 def valid_message?(signed_message) data, digest = get_data_and_digest_from(signed_message) digest_matches_data?(digest, data) end ``` Checks if a signed message could have been generated by signing an object with the `MessageVerifier`'s secret. 
``` verifier = ActiveSupport::MessageVerifier.new 's3Krit' signed_message = verifier.generate 'a private message' verifier.valid_message?(signed_message) # => true tampered_message = signed_message.chop # editing the message invalidates the signature verifier.valid_message?(tampered_message) # => false ``` verified(signed\_message, purpose: nil, \*\*) Show source ``` # File activesupport/lib/active_support/message_verifier.rb, line 152 def verified(signed_message, purpose: nil, **) data, digest = get_data_and_digest_from(signed_message) if digest_matches_data?(digest, data) begin message = Messages::Metadata.verify(decode(data), purpose) @serializer.load(message) if message rescue ArgumentError => argument_error return if argument_error.message.include?("invalid base64") raise end end end ``` Decodes the signed message using the `MessageVerifier`'s secret. ``` verifier = ActiveSupport::MessageVerifier.new 's3Krit' signed_message = verifier.generate 'a private message' verifier.verified(signed_message) # => 'a private message' ``` Returns `nil` if the message was not signed with the same secret. ``` other_verifier = ActiveSupport::MessageVerifier.new 'd1ff3r3nt-s3Krit' other_verifier.verified(signed_message) # => nil ``` Returns `nil` if the message is not Base64-encoded. ``` invalid_message = "f--46a0120593880c733a53b6dad75b42ddc1c8996d" verifier.verified(invalid_message) # => nil ``` Raises any error raised while decoding the signed message. ``` incompatible_message = "test--dad7b06c94abba8d46a15fafaef56c327665d5ff" verifier.verified(incompatible_message) # => TypeError: incompatible marshal file format ``` verify(\*args, \*\*options) Show source ``` # File activesupport/lib/active_support/message_verifier.rb, line 177 def verify(*args, **options) verified(*args, **options) || raise(InvalidSignature) end ``` Decodes the signed message using the `MessageVerifier`'s secret. 
``` verifier = ActiveSupport::MessageVerifier.new 's3Krit' signed_message = verifier.generate 'a private message' verifier.verify(signed_message) # => 'a private message' ``` Raises `InvalidSignature` if the message was not signed with the same secret or was not Base64-encoded. ``` other_verifier = ActiveSupport::MessageVerifier.new 'd1ff3r3nt-s3Krit' other_verifier.verify(signed_message) # => ActiveSupport::MessageVerifier::InvalidSignature ``` rails class ActiveSupport::SecureCompareRotator class ActiveSupport::SecureCompareRotator ========================================== Parent: [Object](../object) Included modules: [ActiveSupport::SecurityUtils](securityutils) The [`ActiveSupport::SecureCompareRotator`](securecomparerotator) is a wrapper around `ActiveSupport::SecurityUtils.secure_compare` and allows you to rotate a previously defined value to a new one. It can be used as follows: ``` rotator = ActiveSupport::SecureCompareRotator.new('new_production_value') rotator.rotate('previous_production_value') rotator.secure_compare!('previous_production_value') ``` One real use case would be rotating basic auth credentials: ``` class MyController < ApplicationController def authenticate_request rotator = ActiveSupport::SecureCompareRotator.new('new_password') rotator.rotate('old_password') authenticate_or_request_with_http_basic do |username, password| rotator.secure_compare!(password) rescue ActiveSupport::SecureCompareRotator::InvalidMatch false end end end ``` InvalidMatch new(value, \*\*\_options) Show source ``` # File activesupport/lib/active_support/secure_compare_rotator.rb, line 36 def initialize(value, **_options) @value = value end ``` secure\_compare!(other\_value, on\_rotation: @on\_rotation) Show source ``` # File activesupport/lib/active_support/secure_compare_rotator.rb, line 40 def secure_compare!(other_value, on_rotation: @on_rotation) secure_compare(@value, other_value) || run_rotations(on_rotation) { |wrapper| 
wrapper.secure_compare!(other_value) } || raise(InvalidMatch) end ``` rails class ActiveSupport::TimeWithZone class ActiveSupport::TimeWithZone ================================== Parent: [Object](../object) A Time-like class that can represent a time in any time zone. Necessary because standard Ruby [`Time`](../time) instances are limited to UTC and the system's `ENV['TZ']` zone. You shouldn't ever need to create a [`TimeWithZone`](timewithzone) instance directly via `new`. Instead use methods `local`, `parse`, `at` and `now` on [`TimeZone`](timezone) instances, and `in_time_zone` on [`Time`](../time) and [`DateTime`](../datetime) instances. ``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' Time.zone.local(2007, 2, 10, 15, 30, 45) # => Sat, 10 Feb 2007 15:30:45.000000000 EST -05:00 Time.zone.parse('2007-02-10 15:30:45') # => Sat, 10 Feb 2007 15:30:45.000000000 EST -05:00 Time.zone.at(1171139445) # => Sat, 10 Feb 2007 15:30:45.000000000 EST -05:00 Time.zone.now # => Sun, 18 May 2008 13:07:55.754107581 EDT -04:00 Time.utc(2007, 2, 10, 20, 30, 45).in_time_zone # => Sat, 10 Feb 2007 15:30:45.000000000 EST -05:00 ``` See [`Time`](../time) and [`TimeZone`](timezone) for further documentation of these methods. [`TimeWithZone`](timewithzone) instances implement the same API as Ruby [`Time`](../time) instances, so that [`Time`](../time) and [`TimeWithZone`](timewithzone) instances are interchangeable. ``` t = Time.zone.now # => Sun, 18 May 2008 13:27:25.031505668 EDT -04:00 t.hour # => 13 t.dst? 
# => true t.utc_offset # => -14400 t.zone # => "EDT" t.to_formatted_s(:rfc822) # => "Sun, 18 May 2008 13:27:25 -0400" t + 1.day # => Mon, 19 May 2008 13:27:25.031505668 EDT -04:00 t.beginning_of_year # => Tue, 01 Jan 2008 00:00:00.000000000 EST -05:00 t > Time.utc(1999) # => true t.is_a?(Time) # => true t.is_a?(ActiveSupport::TimeWithZone) # => true ``` PRECISIONS SECONDS\_PER\_DAY time\_zone[R] name() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 44 def self.name ActiveSupport::Deprecation.warn(<<~EOM) ActiveSupport::TimeWithZone.name has been deprecated and from Rails 7.1 will use the default Ruby implementation. You can set `config.active_support.remove_deprecated_time_with_zone_name = true` to enable the new behavior now. EOM "Time" end ``` Report class name as 'Time' to thwart type checking. new(utc\_time, time\_zone, local\_time = nil, period = nil) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 61 def initialize(utc_time, time_zone, local_time = nil, period = nil) @utc = utc_time ? transfer_time_values_to_utc_constructor(utc_time) : nil @time_zone, @time = time_zone, local_time @period = @utc ? period : get_period_and_ensure_valid_local_time(period) end ``` +(other) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 328 def +(other) if duration_of_variable_length?(other) method_missing(:+, other) else result = utc.acts_like?(:date) ? utc.since(other) : utc + other rescue utc.since(other) result.in_time_zone(time_zone) end end ``` Adds an interval of time to the current object's time and returns that value as a new [`TimeWithZone`](timewithzone) object. 
``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' now = Time.zone.now # => Sun, 02 Nov 2014 01:26:28.725182881 EDT -04:00 now + 1000 # => Sun, 02 Nov 2014 01:43:08.725182881 EDT -04:00 ``` If we're adding a [`Duration`](duration) of variable length (i.e., years, months, days), move forward from [`time`](timewithzone#method-i-time), otherwise move forward from [`utc`](timewithzone#method-i-utc), for accuracy when moving across DST boundaries. For instance, a time + 24.hours will advance exactly 24 hours, while a time + 1.day will advance 23-25 hours, depending on the day. ``` now + 24.hours # => Mon, 03 Nov 2014 00:26:28.725182881 EST -05:00 now + 1.day # => Mon, 03 Nov 2014 01:26:28.725182881 EST -05:00 ``` Also aliased as: [since](timewithzone#method-i-since), [in](timewithzone#method-i-in) -(other) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 363 def -(other) if other.acts_like?(:time) to_time - other.to_time elsif duration_of_variable_length?(other) method_missing(:-, other) else result = utc.acts_like?(:date) ? utc.ago(other) : utc - other rescue utc.ago(other) result.in_time_zone(time_zone) end end ``` Subtracts an interval of time and returns a new [`TimeWithZone`](timewithzone) object unless the other value `acts_like?` time. Then it will return a `Float` of the difference between the two times that represents the difference between the current object's time and the `other` time. ``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' now = Time.zone.now # => Mon, 03 Nov 2014 00:26:28.725182881 EST -05:00 now - 1000 # => Mon, 03 Nov 2014 00:09:48.725182881 EST -05:00 ``` If subtracting a [`Duration`](duration) of variable length (i.e., years, months, days), move backward from [`time`](timewithzone#method-i-time), otherwise move backward from [`utc`](timewithzone#method-i-utc), for accuracy when moving across DST boundaries. 
For instance, a time - 24.hours will subtract exactly 24 hours, while a time - 1.day will subtract 23-25 hours, depending on the day. ``` now - 24.hours # => Sun, 02 Nov 2014 01:26:28.725182881 EDT -04:00 now - 1.day # => Sun, 02 Nov 2014 00:26:28.725182881 EDT -04:00 ``` If both the [`TimeWithZone`](timewithzone) object and the other value act like [`Time`](../time), a `Float` will be returned. ``` Time.zone.now - 1.day.ago # => 86399.999967 ``` <=>(other) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 261 def <=>(other) utc <=> other end ``` Use the time in UTC for comparisons. acts\_like\_time?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 524 def acts_like_time? true end ``` So that `self` `acts_like?(:time)`. advance(options) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 452 def advance(options) # If we're advancing a value of variable length (i.e., years, weeks, months, days), advance from #time, # otherwise advance from #utc, for accuracy when moving across DST boundaries if options.values_at(:years, :weeks, :months, :days).any? method_missing(:advance, options) else utc.advance(options).in_time_zone(time_zone) end end ``` Uses [`Date`](../date) to provide precise [`Time`](../time) calculations for years, months, and days according to the proleptic Gregorian calendar. The result is returned as a new [`TimeWithZone`](timewithzone) object. The `options` parameter takes a hash with any of these keys: `:years`, `:months`, `:weeks`, `:days`, `:hours`, `:minutes`, `:seconds`. If advancing by a value of variable length (i.e., years, weeks, months, days), move forward from [`time`](timewithzone#method-i-time), otherwise move forward from [`utc`](timewithzone#method-i-utc), for accuracy when moving across DST boundaries. 
``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' now = Time.zone.now # => Sun, 02 Nov 2014 01:26:28.558049687 EDT -04:00 now.advance(seconds: 1) # => Sun, 02 Nov 2014 01:26:29.558049687 EDT -04:00 now.advance(minutes: 1) # => Sun, 02 Nov 2014 01:27:28.558049687 EDT -04:00 now.advance(hours: 1) # => Sun, 02 Nov 2014 01:26:28.558049687 EST -05:00 now.advance(days: 1) # => Mon, 03 Nov 2014 01:26:28.558049687 EST -05:00 now.advance(weeks: 1) # => Sun, 09 Nov 2014 01:26:28.558049687 EST -05:00 now.advance(months: 1) # => Tue, 02 Dec 2014 01:26:28.558049687 EST -05:00 now.advance(years: 1) # => Mon, 02 Nov 2015 01:26:28.558049687 EST -05:00 ``` ago(other) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 391 def ago(other) since(-other) end ``` Subtracts an interval of time from the current object's time and returns the result as a new [`TimeWithZone`](timewithzone) object. ``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' now = Time.zone.now # => Mon, 03 Nov 2014 00:26:28.725182881 EST -05:00 now.ago(1000) # => Mon, 03 Nov 2014 00:09:48.725182881 EST -05:00 ``` If we're subtracting a [`Duration`](duration) of variable length (i.e., years, months, days), move backward from [`time`](timewithzone#method-i-time), otherwise move backward from [`utc`](timewithzone#method-i-utc), for accuracy when moving across DST boundaries. For instance, `time.ago(24.hours)` will move back exactly 24 hours, while `time.ago(1.day)` will move back 23-25 hours, depending on the day. 
``` now.ago(24.hours) # => Sun, 02 Nov 2014 01:26:28.725182881 EDT -04:00 now.ago(1.day) # => Sun, 02 Nov 2014 00:26:28.725182881 EDT -04:00 ``` as\_json(options = nil) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 176 def as_json(options = nil) if ActiveSupport::JSON::Encoding.use_standard_json_time_format xmlschema(ActiveSupport::JSON::Encoding.time_precision) else %(#{time.strftime("%Y/%m/%d %H:%M:%S")} #{formatted_offset(false)}) end end ``` Coerces time to a string for [`JSON`](json) encoding. The default format is ISO 8601. You can get %Y/%m/%d %H:%M:%S +offset style by setting `ActiveSupport::JSON::Encoding.use_standard_json_time_format` to `false`. ``` # With ActiveSupport::JSON::Encoding.use_standard_json_time_format = true Time.utc(2005,2,1,15,15,10).in_time_zone("Hawaii").to_json # => "2005-02-01T05:15:10.000-10:00" # With ActiveSupport::JSON::Encoding.use_standard_json_time_format = false Time.utc(2005,2,1,15,15,10).in_time_zone("Hawaii").to_json # => "2005/02/01 05:15:10 -1000" ``` between?(min, max) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 269 def between?(min, max) utc.between?(min, max) end ``` Returns true if the current object's time is within the specified `min` and `max` time. blank?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 535 def blank? 
false end ``` An instance of [`ActiveSupport::TimeWithZone`](timewithzone) is never blank change(options) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 412 def change(options) if options[:zone] && options[:offset] raise ArgumentError, "Can't change both :offset and :zone at the same time: #{options.inspect}" end new_time = time.change(options) if options[:zone] new_zone = ::Time.find_zone(options[:zone]) elsif options[:offset] new_zone = ::Time.find_zone(new_time.utc_offset) end new_zone ||= time_zone periods = new_zone.periods_for_local(new_time) self.class.new(nil, new_zone, new_time, periods.include?(period) ? period : nil) end ``` Returns a new `ActiveSupport::TimeWithZone` where one or more of the elements have been changed according to the `options` parameter. The time options (`:hour`, `:min`, `:sec`, `:usec`, `:nsec`) reset cascadingly, so if only the hour is passed, then minute, sec, usec and nsec are set to 0. If the hour and minute are passed, then sec, usec and nsec are set to 0. The `options` parameter takes a hash with any of these keys: `:year`, `:month`, `:day`, `:hour`, `:min`, `:sec`, `:usec`, `:nsec`, `:offset`, `:zone`. Pass either `:usec` or `:nsec`, not both. Similarly, pass either `:zone` or `:offset`, not both. ``` t = Time.zone.now # => Fri, 14 Apr 2017 11:45:15.116992711 EST -05:00 t.change(year: 2020) # => Tue, 14 Apr 2020 11:45:15.116992711 EST -05:00 t.change(hour: 12) # => Fri, 14 Apr 2017 12:00:00.116992711 EST -05:00 t.change(min: 30) # => Fri, 14 Apr 2017 11:30:00.116992711 EST -05:00 t.change(offset: "-10:00") # => Fri, 14 Apr 2017 11:45:15.116992711 HST -10:00 t.change(zone: "Hawaii") # => Fri, 14 Apr 2017 11:45:15.116992711 HST -10:00 ``` comparable\_time() Alias for: [utc](timewithzone#method-i-utc) dst?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 104 def dst? period.dst? 
end ``` Returns true if the current time is within Daylight Savings [`Time`](../time) for the specified time zone. ``` Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' Time.zone.parse("2012-5-30").dst? # => true Time.zone.parse("2012-11-30").dst? # => false ``` Also aliased as: [isdst](timewithzone#method-i-isdst) eql?(other) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 304 def eql?(other) other.eql?(utc) end ``` Returns `true` if `other` is equal to current object. formatted\_offset(colon = true, alternate\_utc\_string = nil) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 135 def formatted_offset(colon = true, alternate_utc_string = nil) utc? && alternate_utc_string || TimeZone.seconds_to_utc_offset(utc_offset, colon) end ``` Returns a formatted string of the offset from UTC, or an alternative string if the time zone is already UTC. ``` Time.zone = 'Eastern Time (US & Canada)' # => "Eastern Time (US & Canada)" Time.zone.now.formatted_offset(true) # => "-05:00" Time.zone.now.formatted_offset(false) # => "-0500" Time.zone = 'UTC' # => "UTC" Time.zone.now.formatted_offset(true, "0") # => "0" ``` freeze() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 539 def freeze # preload instance variables before freezing period; utc; time; to_datetime; to_time super end ``` Calls superclass method future?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 299 def future? utc.future? end ``` Returns true if the current object's time is in the future. 
getgm() Alias for: [utc](timewithzone#method-i-utc) getlocal(utc\_offset = nil) Alias for: [localtime](timewithzone#method-i-localtime) getutc() Alias for: [utc](timewithzone#method-i-utc) gmt?() Alias for: [utc?](timewithzone#method-i-utc-3F) gmt\_offset() Alias for: [utc\_offset](timewithzone#method-i-utc_offset) gmtime() Alias for: [utc](timewithzone#method-i-utc) gmtoff() Alias for: [utc\_offset](timewithzone#method-i-utc_offset) hash() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 308 def hash utc.hash end ``` httpdate() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 196 def httpdate utc.httpdate end ``` Returns a string of the object's date and time in the format used by HTTP requests. ``` Time.zone.now.httpdate # => "Tue, 01 Jan 2013 04:39:43 GMT" ``` in(other) Alias for: [+](timewithzone#method-i-2B) in\_time\_zone(new\_zone = ::Time.zone) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 87 def in_time_zone(new_zone = ::Time.zone) return self if time_zone == new_zone utc.in_time_zone(new_zone) end ``` Returns the simultaneous time in `Time.zone`, or the specified zone. inspect() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 150 def inspect "#{time.strftime('%a, %d %b %Y %H:%M:%S.%9N')} #{zone} #{formatted_offset}" end ``` Returns a string of the object's date, time, zone, and offset from UTC. ``` Time.zone.now.inspect # => "Thu, 04 Dec 2014 11:00:25.624541392 EST -05:00" ``` is\_a?(klass) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 529 def is_a?(klass) klass == ::Time || super end ``` Say we're a [`Time`](../time) to thwart type checking. 
Calls superclass method Also aliased as: [kind\_of?](timewithzone#method-i-kind_of-3F) isdst() Alias for: [dst?](timewithzone#method-i-dst-3F) iso8601(fraction\_digits = 0) Alias for: [xmlschema](timewithzone#method-i-xmlschema) kind\_of?(klass) Alias for: [is\_a?](timewithzone#method-i-is_a-3F) localtime(utc\_offset = nil) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 93 def localtime(utc_offset = nil) utc.getlocal(utc_offset) end ``` Returns a `Time` instance of the simultaneous time in the system timezone. Also aliased as: [getlocal](timewithzone#method-i-getlocal) marshal\_dump() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 545 def marshal_dump [utc, time_zone.name, time] end ``` marshal\_load(variables) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 549 def marshal_load(variables) initialize(variables[0].utc, ::Time.find_zone(variables[1]), variables[2].utc) end ``` method\_missing(sym, \*args, &block) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 570 def method_missing(sym, *args, &block) wrap_with_time_zone time.__send__(sym, *args, &block) rescue NoMethodError => e raise e, e.message.sub(time.inspect, inspect).sub("Time", "ActiveSupport::TimeWithZone"), e.backtrace end ``` Send the missing method to `time` instance, and wrap result in a new [`TimeWithZone`](timewithzone) with the existing `time_zone`. next\_day?() Alias for: [tomorrow?](timewithzone#method-i-tomorrow-3F) past?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 274 def past? utc.past? end ``` Returns true if the current object's time is in the past. period() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 82 def period @period ||= time_zone.period_for_utc(@utc) end ``` Returns the underlying TZInfo::TimezonePeriod. 
prev\_day?() Alias for: [yesterday?](timewithzone#method-i-yesterday-3F) respond\_to?(sym, include\_priv = false) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 555 def respond_to?(sym, include_priv = false) # ensure that we're not going to throw and rescue from NoMethodError in method_missing which is slow return false if sym.to_sym == :to_str super end ``` respond\_to\_missing? is not called in some cases, such as when type conversion is performed with Kernel#String Calls superclass method respond\_to\_missing?(sym, include\_priv) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 563 def respond_to_missing?(sym, include_priv) return false if sym.to_sym == :acts_like_date? time.respond_to?(sym, include_priv) end ``` Ensure proxy class responds to all methods that underlying time instance responds to. rfc2822() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 204 def rfc2822 to_formatted_s(:rfc822) end ``` Returns a string of the object's date and time in the RFC 2822 standard format. ``` Time.zone.now.rfc2822 # => "Tue, 01 Jan 2013 04:51:39 +0000" ``` Also aliased as: [rfc822](timewithzone#method-i-rfc822) rfc3339(fraction\_digits = 0) Alias for: [xmlschema](timewithzone#method-i-xmlschema) rfc822() Alias for: [rfc2822](timewithzone#method-i-rfc2822) since(other) Alias for: [+](timewithzone#method-i-2B) strftime(format) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 255 def strftime(format) format = format.gsub(/((?:\A|[^%])(?:%%)*)%Z/, "\\1#{zone}") getlocal(utc_offset).strftime(format) end ``` Replaces `%Z` directive with +zone before passing to Time#strftime, so that zone information is correct. time() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 68 def time @time ||= incorporate_utc_offset(@utc, utc_offset) end ``` Returns a `Time` instance that represents the time in `time_zone`. 
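The `gsub` in `strftime` above only replaces an *unescaped* `%Z` — one preceded by an even number of `%` signs — so literal percent escapes survive. A plain-Ruby sketch of just that substitution (the `substitute_zone` helper is illustrative):

```ruby
# Replace %Z with a zone abbreviation, but leave escaped directives
# such as "%%Z" untouched -- the same regexp TimeWithZone#strftime uses.
def substitute_zone(format, zone)
  format.gsub(/((?:\A|[^%])(?:%%)*)%Z/, "\\1#{zone}")
end

substitute_zone("%H:%M %Z", "EST")  # => "%H:%M EST"
substitute_zone("%%Z", "EST")       # => "%%Z" (escaped, left alone)
```

The capture group re-emits the preceding character and any `%%` pairs, so only a genuinely active `%Z` directive is rewritten before the format reaches `Time#strftime`.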
to\_a() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 475 def to_a [time.sec, time.min, time.hour, time.day, time.mon, time.year, time.wday, time.yday, dst?, zone] end ``` Returns [`Array`](../array) of parts of [`Time`](../time) in sequence of [seconds, minutes, hours, day, month, year, weekday, yearday, dst?, zone]. ``` now = Time.zone.now # => Tue, 18 Aug 2015 02:29:27.485278555 UTC +00:00 now.to_a # => [27, 29, 2, 18, 8, 2015, 2, 230, false, "UTC"] ``` to\_datetime() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 508 def to_datetime @to_datetime ||= utc.to_datetime.new_offset(Rational(utc_offset, 86_400)) end ``` Returns an instance of [`DateTime`](../datetime) with the timezone's UTC offset ``` Time.zone.now.to_datetime # => Tue, 18 Aug 2015 02:32:20 +0000 Time.current.in_time_zone('Hawaii').to_datetime # => Mon, 17 Aug 2015 16:32:20 -1000 ``` to\_f() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 483 def to_f utc.to_f end ``` Returns the object's date and time as a floating-point number of seconds since the Epoch (January 1, 1970 00:00 UTC). ``` Time.zone.now.to_f # => 1417709320.285418 ``` to\_formatted\_s(format = :default) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 241 def to_formatted_s(format = :default) if format == :db utc.to_formatted_s(format) elsif formatter = ::Time::DATE_FORMATS[format] formatter.respond_to?(:call) ? formatter.call(self).to_s : strftime(formatter) else # Change to to_s when deprecation is gone. "#{time.strftime("%Y-%m-%d %H:%M:%S")} #{formatted_offset(false, 'UTC')}" end end ``` Returns a string of the object's date and time. This method is aliased to `to_fs`. Accepts an optional `format`: * `:default` - default value, mimics Ruby Time#to\_s format. * `:db` - format outputs time in UTC :db time. See [`Time#to_formatted_s`](../time#method-i-to_formatted_s)(:db). 
* Any key in `Time::DATE_FORMATS` can be used. See active\_support/core\_ext/time/conversions.rb. Also aliased as: [to\_fs](timewithzone#method-i-to_fs) to\_fs(format = :default) Alias for: [to\_formatted\_s](timewithzone#method-i-to_formatted_s) to\_i() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 491 def to_i utc.to_i end ``` Returns the object's date and time as an integer number of seconds since the Epoch (January 1, 1970 00:00 UTC). ``` Time.zone.now.to_i # => 1417709320 ``` Also aliased as: [tv\_sec](timewithzone#method-i-tv_sec) to\_r() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 500 def to_r utc.to_r end ``` Returns the object's date and time as a rational number of seconds since the Epoch (January 1, 1970 00:00 UTC). ``` Time.zone.now.to_r # => (708854548642709/500000) ``` to\_s(format = NOT\_SET) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 212 def to_s(format = NOT_SET) if format == :db ActiveSupport::Deprecation.warn( "TimeWithZone#to_s(:db) is deprecated. Please use TimeWithZone#to_formatted_s(:db) instead." ) utc.to_formatted_s(format) elsif formatter = ::Time::DATE_FORMATS[format] ActiveSupport::Deprecation.warn( "TimeWithZone#to_s(#{format.inspect}) is deprecated. Please use TimeWithZone#to_formatted_s(#{format.inspect}) instead." ) formatter.respond_to?(:call) ? formatter.call(self).to_s : strftime(formatter) elsif format == NOT_SET "#{time.strftime("%Y-%m-%d %H:%M:%S")} #{formatted_offset(false, 'UTC')}" # mimicking Ruby Time#to_s format else ActiveSupport::Deprecation.warn( "TimeWithZone#to_s(#{format.inspect}) is deprecated. Please use TimeWithZone#to_formatted_s(#{format.inspect}) instead." ) "#{time.strftime("%Y-%m-%d %H:%M:%S")} #{formatted_offset(false, 'UTC')}" # mimicking Ruby Time#to_s format end end ``` Returns a string of the object's date and time. 
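As the entries above note, `to_f`, `to_i`, and `to_r` all delegate to the UTC instant, so the result is independent of the display zone. The same holds for plain `Time` values, which is easy to check:

```ruby
# Epoch seconds identify an instant, not a local reading: converting
# the same instant to another offset leaves to_i/to_f/to_r unchanged.
utc   = Time.utc(2013, 1, 1, 4, 39, 43)
local = utc.getlocal("-05:00")       # same instant, shown at -05:00

utc.to_i                 # => 1357015183
local.to_i               # => 1357015183
utc.to_r == local.to_r   # => true
```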
to\_time() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 515 def to_time if preserve_timezone @to_time_with_instance_offset ||= getlocal(utc_offset) else @to_time_with_system_offset ||= getlocal end end ``` Returns an instance of `Time`, either with the same UTC offset as `self` or in the local system timezone depending on the setting of `ActiveSupport.to_time_preserves_timezone`. today?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 280 def today? time.today? end ``` Returns true if the current object's time falls within the current day. tomorrow?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 286 def tomorrow? time.tomorrow? end ``` Returns true if the current object's time falls within the next day (tomorrow). Also aliased as: [next\_day?](timewithzone#method-i-next_day-3F) tv\_sec() Alias for: [to\_i](timewithzone#method-i-to_i) utc() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 73 def utc @utc ||= incorporate_utc_offset(@time, -utc_offset) end ``` Returns a `Time` instance of the simultaneous time in the UTC timezone. Also aliased as: [comparable\_time](timewithzone#method-i-comparable_time), [getgm](timewithzone#method-i-getgm), [getutc](timewithzone#method-i-getutc), [gmtime](timewithzone#method-i-gmtime) utc?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 115 def utc? zone == "UTC" || zone == "UCT" end ``` Returns true if the current time zone is set to UTC. ``` Time.zone = 'UTC' # => 'UTC' Time.zone.now.utc? # => true Time.zone = 'Eastern Time (US & Canada)' # => 'Eastern Time (US & Canada)' Time.zone.now.utc? # => false ``` Also aliased as: [gmt?](timewithzone#method-i-gmt-3F) utc\_offset() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 121 def utc_offset period.observed_utc_offset end ``` Returns the offset from current time to UTC time in seconds. 
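`time` and `utc` above are inverses: the wall-clock reading is the UTC instant shifted by `utc_offset`, and shifting back by `-utc_offset` recovers the instant. A simplified sketch (the real `incorporate_utc_offset` works on calendar fields; this illustrative stand-in just adds seconds):

```ruby
# Simplified stand-in for incorporate_utc_offset: shift a clock
# reading by an offset given in seconds.
def incorporate_utc_offset(time, offset_seconds)
  time + offset_seconds
end

utc_instant = Time.utc(2015, 8, 18, 2, 29, 27)
offset      = -5 * 3600                        # e.g. EST, -05:00

wall = incorporate_utc_offset(utc_instant, offset)   # time: local reading
back = incorporate_utc_offset(wall, -offset)         # utc: invert the shift

wall.hour            # => 21 (Aug 17, 21:29:27)
back == utc_instant  # => true
```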
Also aliased as: [gmt\_offset](timewithzone#method-i-gmt_offset), [gmtoff](timewithzone#method-i-gmtoff) xmlschema(fraction\_digits = 0) Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 158 def xmlschema(fraction_digits = 0) "#{time.strftime(PRECISIONS[fraction_digits.to_i])}#{formatted_offset(true, 'Z')}" end ``` Returns a string of the object's date and time in the ISO 8601 standard format. ``` Time.zone.now.xmlschema # => "2014-12-04T11:02:37-05:00" ``` Also aliased as: [iso8601](timewithzone#method-i-iso8601), [rfc3339](timewithzone#method-i-rfc3339) yesterday?() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 293 def yesterday? time.yesterday? end ``` Returns true if the current object's time falls within the previous day (yesterday). Also aliased as: [prev\_day?](timewithzone#method-i-prev_day-3F) zone() Show source ``` # File activesupport/lib/active_support/time_with_zone.rb, line 143 def zone period.abbreviation end ``` Returns the time zone abbreviation. ``` Time.zone = 'Eastern Time (US & Canada)' # => "Eastern Time (US & Canada)" Time.zone.now.zone # => "EST" ```
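The `fraction_digits` argument of `xmlschema` mirrors the one on Ruby's own `Time#xmlschema` (from the `time` standard library), which controls how many fractional-second digits are emitted:

```ruby
require "time"   # stdlib: adds Time#xmlschema / Time#iso8601

t = Time.utc(2014, 12, 4, 11, 2, 37.5)
t.xmlschema      # => "2014-12-04T11:02:37Z"
t.xmlschema(3)   # => "2014-12-04T11:02:37.500Z"
t.iso8601(1)     # => "2014-12-04T11:02:37.5Z"
```

The zoned version differs only in the suffix: it renders the offset via `formatted_offset`, so values in a non-UTC zone end in e.g. `-05:00` instead of `Z`.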
rails module ActiveSupport::TaggedLogging module ActiveSupport::TaggedLogging ==================================== Wraps any standard [`Logger`](logger) object to provide tagging capabilities. May be called with a block: ``` logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT)) logger.tagged('BCX') { logger.info 'Stuff' } # Logs "[BCX] Stuff" logger.tagged('BCX', "Jason") { logger.info 'Stuff' } # Logs "[BCX] [Jason] Stuff" logger.tagged('BCX') { logger.tagged('Jason') { logger.info 'Stuff' } } # Logs "[BCX] [Jason] Stuff" ``` If called without a block, a new logger will be returned with applied tags: ``` logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT)) logger.tagged("BCX").info "Stuff" # Logs "[BCX] Stuff" logger.tagged("BCX", "Jason").info "Stuff" # Logs "[BCX] [Jason] Stuff" logger.tagged("BCX").tagged("Jason").info "Stuff" # Logs "[BCX] [Jason] Stuff" ``` This is used by the default [`Rails.logger`](../rails#attribute-c-logger) as configured by Railties to make it easy to stamp log lines with subdomains, request ids, and anything else to aid debugging of multi-user production applications. new(logger) Show source ``` # File activesupport/lib/active_support/tagged_logging.rb, line 81 def self.new(logger) logger = logger.clone if logger.formatter logger.formatter = logger.formatter.dup else # Ensure we set a default formatter so we aren't extending nil! logger.formatter = ActiveSupport::Logger::SimpleFormatter.new end logger.formatter.extend Formatter logger.extend(self) end ``` flush() Show source ``` # File activesupport/lib/active_support/tagged_logging.rb, line 108 def flush clear_tags! super if defined?(super) end ``` Calls superclass method tagged(\*tags) { |self| ... } Show source ``` # File activesupport/lib/active_support/tagged_logging.rb, line 97 def tagged(*tags) if block_given? 
    formatter.tagged(*tags) { yield self }
  else
    logger = ActiveSupport::TaggedLogging.new(self)
    logger.formatter.extend LocalTagStorage
    logger.push_tags(*formatter.current_tags, *tags)
    logger
  end
end
```

rails module ActiveSupport::ActionableError

module ActiveSupport::ActionableError
======================================

Actionable errors let you define actions to resolve an error. To make an error actionable, include the `ActiveSupport::ActionableError` module and invoke the `action` class macro to define the action. An action needs a name and a block to execute.

rails module ActiveSupport::LazyLoadHooks

module ActiveSupport::LazyLoadHooks
====================================

lazy\_load\_hooks allows Rails to lazily load many components, which makes the app boot faster. Thanks to this feature, there is no need to require `ActiveRecord::Base` at boot time purely to apply configuration. Instead, a hook is registered that applies the configuration once `ActiveRecord::Base` is loaded. `ActiveRecord::Base` is used here as an example, but the feature can be applied elsewhere too.

Here is an example where the `on_load` method is called to register a hook.

```
initializer 'active_record.initialize_timezone' do
  ActiveSupport.on_load(:active_record) do
    self.time_zone_aware_attributes = true
    self.default_timezone = :utc
  end
end
```

When the entirety of `ActiveRecord::Base` has been evaluated, `run_load_hooks` is invoked. The very last line of `ActiveRecord::Base` is:

```
ActiveSupport.run_load_hooks(:active_record, ActiveRecord::Base)
```

on\_load(name, options = {}, &block) Show source

```
# File activesupport/lib/active_support/lazy_load_hooks.rb, line 41
def on_load(name, options = {}, &block)
  @loaded[name].each do |base|
    execute_hook(name, base, options, block)
  end

  @load_hooks[name] << [block, options]
end
```

Declares a block that will be executed when a Rails component is fully loaded.
Options:

* `:yield` - Yields the object passed to [`run_load_hooks`](lazyloadhooks#method-i-run_load_hooks) to the `block`.
* `:run_once` - The given `block` will run only once.

run\_load\_hooks(name, base = Object) Show source

```
# File activesupport/lib/active_support/lazy_load_hooks.rb, line 49
def run_load_hooks(name, base = Object)
  @loaded[name] << base
  @load_hooks[name].each do |hook, options|
    execute_hook(name, base, options, hook)
  end
end
```

rails class ActiveSupport::Reloader

class ActiveSupport::Reloader
==============================

Parent: [ActiveSupport::ExecutionWrapper](executionwrapper)

after\_class\_unload(\*args, &block) Show source

```
# File activesupport/lib/active_support/reloader.rb, line 43
def self.after_class_unload(*args, &block)
  set_callback(:class_unload, :after, *args, &block)
end
```

Registers a callback that will run immediately after the classes are unloaded.

before\_class\_unload(\*args, &block) Show source

```
# File activesupport/lib/active_support/reloader.rb, line 38
def self.before_class_unload(*args, &block)
  set_callback(:class_unload, *args, &block)
end
```

Registers a callback that will run immediately before the classes are unloaded.

new() Show source

```
# File activesupport/lib/active_support/reloader.rb, line 91
def initialize
  super
  @locked = false
end
```

Calls superclass method

reload!() Show source

```
# File activesupport/lib/active_support/reloader.rb, line 50
def self.reload!
  executor.wrap do
    new.tap do |instance|
      instance.run!
    ensure
      instance.complete!
    end
  end
  prepare!
end
```

Initiates a manual reload.

to\_prepare(\*args, &block) Show source

```
# File activesupport/lib/active_support/reloader.rb, line 33
def self.to_prepare(*args, &block)
  set_callback(:prepare, *args, &block)
end
```

Registers a callback that will run once at application startup and every time the code is reloaded.
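The three class-level hooks above (`before_class_unload`, `after_class_unload`, `to_prepare`) are ordinary callback registrations that fire at fixed points of a reload. A minimal sketch of that lifecycle (the `MiniReloader` class is illustrative; the real `Reloader` builds on `ActiveSupport::Callbacks` and the Executor):

```ruby
# Illustrative reloader: named callback lists that fire in a fixed
# order when reload! runs (unload hooks first, then prepare hooks).
class MiniReloader
  @callbacks = Hash.new { |h, k| h[k] = [] }

  class << self
    def to_prepare(&block)
      @callbacks[:prepare] << block
    end

    def before_class_unload(&block)
      @callbacks[:before_unload] << block
    end

    def after_class_unload(&block)
      @callbacks[:after_unload] << block
    end

    def reload!
      @callbacks[:before_unload].each(&:call)
      # ... classes would be unloaded here ...
      @callbacks[:after_unload].each(&:call)
      @callbacks[:prepare].each(&:call)
    end
  end
end

events = []
MiniReloader.to_prepare          { events << :prepare }
MiniReloader.before_class_unload { events << :before }
MiniReloader.after_class_unload  { events << :after }
MiniReloader.reload!
events  # => [:before, :after, :prepare]
```

This ordering matches the documented semantics: unload callbacks bracket the class unload, and `to_prepare` blocks run after every reload (and once at startup).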
wrap() Show source ``` # File activesupport/lib/active_support/reloader.rb, line 70 def self.wrap executor.wrap do super end end ``` Run the supplied block as a work unit, reloading code as needed Calls superclass method [`ActiveSupport::ExecutionWrapper::wrap`](executionwrapper#method-c-wrap) release\_unload\_lock!() Show source ``` # File activesupport/lib/active_support/reloader.rb, line 106 def release_unload_lock! if @locked @locked = false ActiveSupport::Dependencies.interlock.done_unloading end end ``` Release the unload lock if it has been previously obtained require\_unload\_lock!() Show source ``` # File activesupport/lib/active_support/reloader.rb, line 98 def require_unload_lock! unless @locked ActiveSupport::Dependencies.interlock.start_unloading @locked = true end end ``` Acquire the `ActiveSupport::Dependencies::Interlock` unload lock, ensuring it will be released automatically rails class ActiveSupport::BacktraceCleaner class ActiveSupport::BacktraceCleaner ====================================== Parent: [Object](../object) Backtraces often include many lines that are not relevant for the context under review. This makes it hard to find the signal amongst the backtrace noise, and adds debugging time. With a [`BacktraceCleaner`](backtracecleaner), filters and silencers are used to remove the noisy lines, so that only the most relevant lines remain. Filters are used to modify lines of data, while silencers are used to remove lines entirely. The typical filter use case is to remove lengthy path information from the start of each line, and view file paths relevant to the app directory instead of the file system root. The typical silencer use case is to exclude the output of a noisy library from the backtrace, so that you can focus on the rest. 
``` bc = ActiveSupport::BacktraceCleaner.new bc.add_filter { |line| line.gsub(Rails.root.to_s, '') } # strip the Rails.root prefix bc.add_silencer { |line| /puma|rubygems/.match?(line) } # skip any lines from puma or rubygems bc.clean(exception.backtrace) # perform the cleanup ``` To reconfigure an existing [`BacktraceCleaner`](backtracecleaner) (like the default one in Rails) and show as much data as possible, you can always call `BacktraceCleaner#remove_silencers!`, which will restore the backtrace to a pristine state. If you need to reconfigure an existing [`BacktraceCleaner`](backtracecleaner) so that it does not filter or modify the paths of any lines of the backtrace, you can call `BacktraceCleaner#remove_filters!` These two methods will give you a completely untouched backtrace. Inspired by the Quiet Backtrace gem by thoughtbot. FORMATTED\_GEMS\_PATTERN new() Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 32 def initialize @filters, @silencers = [], [] add_gem_filter add_gem_silencer add_stdlib_silencer end ``` add\_filter(&block) Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 60 def add_filter(&block) @filters << block end ``` Adds a filter from the block provided. Each line in the backtrace will be mapped against this filter. ``` # Will turn "/my/rails/root/app/models/person.rb" into "/app/models/person.rb" backtrace_cleaner.add_filter { |line| line.gsub(Rails.root, '') } ``` add\_silencer(&block) Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 69 def add_silencer(&block) @silencers << block end ``` Adds a silencer from the block provided. If the silencer returns `true` for a given line, it will be excluded from the clean backtrace. 
``` # Will reject all lines that include the word "puma", like "/gems/puma/server.rb" or "/app/my_puma_server/rb" backtrace_cleaner.add_silencer { |line| /puma/.match?(line) } ``` clean(backtrace, kind = :silent) Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 41 def clean(backtrace, kind = :silent) filtered = filter_backtrace(backtrace) case kind when :silent silence(filtered) when :noise noise(filtered) else filtered end end ``` Returns the backtrace after all filters and silencers have been run against it. Filters run first, then silencers. Also aliased as: [filter](backtracecleaner#method-i-filter) filter(backtrace, kind = :silent) Alias for: [clean](backtracecleaner#method-i-clean) remove\_filters!() Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 83 def remove_filters! @filters = [] end ``` Removes all filters, but leaves in the silencers. Useful if you suddenly need to see entire filepaths in the backtrace that you had already filtered out. remove\_silencers!() Show source ``` # File activesupport/lib/active_support/backtrace_cleaner.rb, line 76 def remove_silencers! @silencers = [] end ``` Removes all silencers, but leaves in the filters. Useful if your context of debugging suddenly expands as you suspect a bug in one of the libraries you use. rails module ActiveSupport::Multibyte module ActiveSupport::Multibyte ================================ proxy\_class() Show source ``` # File activesupport/lib/active_support/multibyte.rb, line 19 def self.proxy_class @proxy_class ||= ActiveSupport::Multibyte::Chars end ``` Returns the current proxy class. proxy\_class=(klass) Show source ``` # File activesupport/lib/active_support/multibyte.rb, line 14 def self.proxy_class=(klass) @proxy_class = klass end ``` The proxy class returned when calling mb\_chars. You can use this accessor to configure your own proxy class so you can support other encodings. 
See the [`ActiveSupport::Multibyte::Chars`](multibyte/chars) implementation for an example how to do this. ``` ActiveSupport::Multibyte.proxy_class = CharsForUTF32 ``` rails module ActiveSupport::Autoload module ActiveSupport::Autoload =============================== [`Autoload`](autoload) and eager load conveniences for your library. This module allows you to define autoloads based on Rails conventions (i.e. no need to define the path it is automatically guessed based on the filename) and also define a set of constants that needs to be eager loaded: ``` module MyLib extend ActiveSupport::Autoload autoload :Model eager_autoload do autoload :Cache end end ``` Then your library can be eager loaded by simply calling: ``` MyLib.eager_load! ``` autoload(const\_name, path = @\_at\_path) Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 37 def autoload(const_name, path = @_at_path) unless path full = [name, @_under_path, const_name.to_s].compact.join("::") path = Inflector.underscore(full) end if @_eager_autoload @_autoloads[const_name] = path end super const_name, path end ``` Calls superclass method autoload\_at(path) { || ... } Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 57 def autoload_at(path) @_at_path, old_path = path, @_at_path yield ensure @_at_path = old_path end ``` autoload\_under(path) { || ... } Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 50 def autoload_under(path) @_under_path, old_path = path, @_under_path yield ensure @_under_path = old_path end ``` autoloads() Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 75 def autoloads @_autoloads end ``` eager\_autoload() { || ... 
} Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 64 def eager_autoload old_eager, @_eager_autoload = @_eager_autoload, true yield ensure @_eager_autoload = old_eager end ``` eager\_load!() Show source ``` # File activesupport/lib/active_support/dependencies/autoload.rb, line 71 def eager_load! @_autoloads.each_value { |file| require file } end ``` rails module ActiveSupport::Benchmarkable module ActiveSupport::Benchmarkable ==================================== benchmark(message = "Benchmarking", options = {}) { || ... } Show source ``` # File activesupport/lib/active_support/benchmarkable.rb, line 37 def benchmark(message = "Benchmarking", options = {}, &block) if logger options.assert_valid_keys(:level, :silence) options[:level] ||= :info result = nil ms = Benchmark.ms { result = options[:silence] ? logger.silence(&block) : yield } logger.public_send(options[:level], "%s (%.1fms)" % [ message, ms ]) result else yield end end ``` Allows you to measure the execution time of a block in a template and records the result to the log. Wrap this block around expensive operations or possible bottlenecks to get a time reading for the operation. For example, let's say you thought your file processing method was taking too long; you could wrap it in a benchmark block. ``` <% benchmark 'Process data files' do %> <%= expensive_files_operation %> <% end %> ``` That would add something like “Process data files (345.2ms)” to the log, which you can then use to compare timings when optimizing your code. You may give an optional logger level (`:debug`, `:info`, `:warn`, `:error`) as the `:level` option. The default logger level value is `:info`. ``` <% benchmark 'Low-level files', level: :debug do %> <%= lowlevel_files_operation %> <% end %> ``` Finally, you can pass true as the third argument to silence all log activity (other than the timing information) from inside the block. 
This is great for boiling down a noisy block to just a single statement that produces one log line: ``` <% benchmark 'Process data files', level: :info, silence: true do %> <%= expensive_and_chatty_files_operation %> <% end %> ``` rails class ActiveSupport::ProxyObject class ActiveSupport::ProxyObject ================================= Parent: BasicObject A class with no predefined methods that behaves similarly to Builder's BlankSlate. Used for proxy classes. raise(\*args) Show source ``` # File activesupport/lib/active_support/proxy_object.rb, line 11 def raise(*args) ::Object.send(:raise, *args) end ``` Let [`ActiveSupport::ProxyObject`](proxyobject) at least raise exceptions. rails class ActiveSupport::OrderedOptions class ActiveSupport::OrderedOptions ==================================== Parent: [Hash](../hash) `OrderedOptions` inherits from `Hash` and provides dynamic accessor methods. With a `Hash`, key-value pairs are typically managed like this: ``` h = {} h[:boy] = 'John' h[:girl] = 'Mary' h[:boy] # => 'John' h[:girl] # => 'Mary' h[:dog] # => nil ``` Using `OrderedOptions`, the above code can be written as: ``` h = ActiveSupport::OrderedOptions.new h.boy = 'John' h.girl = 'Mary' h.boy # => 'John' h.girl # => 'Mary' h.dog # => nil ``` To raise an exception when the value is blank, append a bang to the key name, like: ``` h.dog! # => raises KeyError: :dog is blank ``` [](key) Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 39 def [](key) super(key.to_sym) end ``` Calls superclass method Also aliased as: [\_get](orderedoptions#method-i-_get) []=(key, value) Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 35 def []=(key, value) super(key.to_sym, value) end ``` Calls superclass method \_get(key) Alias for: [[]](orderedoptions#method-i-5B-5D) extractable\_options?() Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 62 def extractable_options? 
true end ``` inspect() Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 66 def inspect "#<#{self.class.name} #{super}>" end ``` method\_missing(name, \*args) Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 43 def method_missing(name, *args) name_string = +name.to_s if name_string.chomp!("=") self[name_string] = args.first else bangs = name_string.chomp!("!") if bangs self[name_string].presence || raise(KeyError.new(":#{name_string} is blank")) else self[name_string] end end end ``` respond\_to\_missing?(name, include\_private) Show source ``` # File activesupport/lib/active_support/ordered_options.rb, line 58 def respond_to_missing?(name, include_private) true end ``` rails class ActiveSupport::HashWithIndifferentAccess class ActiveSupport::HashWithIndifferentAccess =============================================== Parent: [Hash](../hash) Implements a hash where keys `:foo` and `"foo"` are considered to be the same. ``` rgb = ActiveSupport::HashWithIndifferentAccess.new rgb[:black] = '#000000' rgb[:black] # => '#000000' rgb['black'] # => '#000000' rgb['white'] = '#FFFFFF' rgb[:white] # => '#FFFFFF' rgb['white'] # => '#FFFFFF' ``` Internally symbols are mapped to strings when used as keys in the entire writing interface (calling `[]=`, `merge`, etc). This mapping belongs to the public interface. For example, given: ``` hash = ActiveSupport::HashWithIndifferentAccess.new(a: 1) ``` You are guaranteed that the key is returned as a string: ``` hash.keys # => ["a"] ``` Technically other types of keys are accepted: ``` hash = ActiveSupport::HashWithIndifferentAccess.new(a: 1) hash[0] = 0 hash # => {"a"=>1, 0=>0} ``` but this class is intended for use cases where strings or symbols are the expected keys and it is convenient to understand both as the same. For example the `params` hash in Ruby on Rails. 
Note that core extensions define `Hash#with_indifferent_access`: ``` rgb = { black: '#000000', white: '#FFFFFF' }.with_indifferent_access ``` which may be handy. To access this class outside of Rails, require the core extension with: ``` require "active_support/core_ext/hash/indifferent_access" ``` which will, in turn, require this file. [](\*args) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 83 def self.[](*args) new.merge!(Hash[*args]) end ``` new(constructor = nil) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 68 def initialize(constructor = nil) if constructor.respond_to?(:to_hash) super() update(constructor) hash = constructor.is_a?(Hash) ? constructor : constructor.to_hash self.default = hash.default if hash.default self.default_proc = hash.default_proc if hash.default_proc elsif constructor.nil? super() else super(constructor) end end ``` Calls superclass method [](key) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 166 def [](key) super(convert_key(key)) end ``` Same as `Hash#[]` where the key passed as argument can be either a string or a symbol: ``` counters = ActiveSupport::HashWithIndifferentAccess.new counters[:foo] = 1 counters['foo'] # => 1 counters[:foo] # => 1 counters[:zoo] # => nil ``` Calls superclass method []=(key, value) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 96 def []=(key, value) regular_writer(convert_key(key), convert_value(value, conversion: :assignment)) end ``` Assigns a new value to the hash: ``` hash = ActiveSupport::HashWithIndifferentAccess.new hash[:key] = 'value' ``` This value can be later fetched using either `:key` or `'key'`. 
Also aliased as: [regular\_writer](hashwithindifferentaccess#method-i-regular_writer), [store](hashwithindifferentaccess#method-i-store) assoc(key) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 179 def assoc(key) super(convert_key(key)) end ``` Same as `Hash#assoc` where the key passed as argument can be either a string or a symbol: ``` counters = ActiveSupport::HashWithIndifferentAccess.new counters[:foo] = 1 counters.assoc('foo') # => ["foo", 1] counters.assoc(:foo) # => ["foo", 1] counters.assoc(:zoo) # => nil ``` Calls superclass method compact() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 357 def compact dup.tap(&:compact!) end ``` deep\_stringify\_keys() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 311 def deep_stringify_keys; dup end ``` deep\_stringify\_keys!() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 309 def deep_stringify_keys!; self end ``` deep\_symbolize\_keys() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 316 def deep_symbolize_keys; to_hash.deep_symbolize_keys! end ``` default(\*args) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 221 def default(*args) super(*args.map { |arg| convert_key(arg) }) end ``` Same as `Hash#default` where the key passed as argument can be either a string or a symbol: ``` hash = ActiveSupport::HashWithIndifferentAccess.new(1) hash.default # => 1 hash = ActiveSupport::HashWithIndifferentAccess.new { |hash, key| key } hash.default # => nil hash.default('foo') # => 'foo' hash.default(:foo) # => 'foo' ``` Calls superclass method delete(key) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 295 def delete(key) super(convert_key(key)) end ``` Removes the specified key from the hash. 
Calls superclass method dig(\*args) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 206 def dig(*args) args[0] = convert_key(args[0]) if args.size > 0 super(*args) end ``` Same as `Hash#dig` where the key passed as argument can be either a string or a symbol: ``` counters = ActiveSupport::HashWithIndifferentAccess.new counters[:foo] = { bar: 1 } counters.dig('foo', 'bar') # => 1 counters.dig(:foo, :bar) # => 1 counters.dig(:zoo) # => nil ``` Calls superclass method dup() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 256 def dup self.class.new(self).tap do |new_hash| set_defaults(new_hash) end end ``` Returns a shallow copy of the hash. ``` hash = ActiveSupport::HashWithIndifferentAccess.new({ a: { b: 'b' } }) dup = hash.dup dup[:a][:c] = 'c' hash[:a][:c] # => "c" dup[:a][:c] # => "c" ``` except(\*keys) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 303 def except(*keys) slice(*self.keys - keys.map { |key| convert_key(key) }) end ``` Returns a hash with indifferent access that includes everything except given keys. ``` hash = { a: "x", b: "y", c: 10 }.with_indifferent_access hash.except(:a, "b") # => {c: 10}.with_indifferent_access hash # => { a: "x", b: "y", c: 10 }.with_indifferent_access ``` Also aliased as: [without](hashwithindifferentaccess#method-i-without) extractable\_options?() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 56 def extractable_options? true end ``` Returns `true` so that `Array#extract_options!` finds members of this class. 
fetch(key, \*extras) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 193 def fetch(key, *extras) super(convert_key(key), *extras) end ``` Same as `Hash#fetch` where the key passed as argument can be either a string or a symbol: ``` counters = ActiveSupport::HashWithIndifferentAccess.new counters[:foo] = 1 counters.fetch('foo') # => 1 counters.fetch(:bar, 0) # => 0 counters.fetch(:bar) { |key| 0 } # => 0 counters.fetch(:zoo) # => KeyError: key not found: "zoo" ``` Calls superclass method fetch\_values(\*indices, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 244 def fetch_values(*indices, &block) super(*indices.map { |key| convert_key(key) }, &block) end ``` Returns an array of the values at the specified indices, but also raises an exception when one of the keys can't be found. ``` hash = ActiveSupport::HashWithIndifferentAccess.new hash[:a] = 'x' hash[:b] = 'y' hash.fetch_values('a', 'b') # => ["x", "y"] hash.fetch_values('a', 'c') { |key| 'z' } # => ["x", "z"] hash.fetch_values('a', 'c') # => KeyError: key not found: "c" ``` Calls superclass method has\_key?(key) Alias for: [key?](hashwithindifferentaccess#method-i-key-3F) include?(key) Alias for: [key?](hashwithindifferentaccess#method-i-key-3F) key?(key) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 149 def key?(key) super(convert_key(key)) end ``` Checks the hash for a key matching the argument passed in: ``` hash = ActiveSupport::HashWithIndifferentAccess.new hash['key'] = 'value' hash.key?(:key) # => true hash.key?('key') # => true ``` Calls superclass method Also aliased as: [include?](hashwithindifferentaccess#method-i-include-3F), [has\_key?](hashwithindifferentaccess#method-i-has_key-3F), [member?](hashwithindifferentaccess#method-i-member-3F) member?(key) Alias for: [key?](hashwithindifferentaccess#method-i-key-3F) merge(\*hashes, &block) Show source ``` # 
File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 265 def merge(*hashes, &block) dup.update(*hashes, &block) end ``` This method has the same semantics as `update`, except it does not modify the receiver but rather returns a new hash with indifferent access with the result of the merge. merge!(\*other\_hashes, &block) Alias for: [update](hashwithindifferentaccess#method-i-update) nested\_under\_indifferent\_access() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 64 def nested_under_indifferent_access self end ``` regular\_update(\*other\_hashes, &block) Alias for: [update](hashwithindifferentaccess#method-i-update) regular\_writer(key, value) Alias for: [[]=](hashwithindifferentaccess#method-i-5B-5D-3D) reject(\*args, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 324 def reject(*args, &block) return to_enum(:reject) unless block_given? dup.tap { |hash| hash.reject!(*args, &block) } end ``` replace(other\_hash) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 290 def replace(other_hash) super(self.class.new(other_hash)) end ``` Replaces the contents of this hash with other\_hash. 
``` h = { "a" => 100, "b" => 200 } h.replace({ "c" => 300, "d" => 400 }) # => {"c"=>300, "d"=>400} ``` Calls superclass method reverse\_merge(other\_hash) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 275 def reverse_merge(other_hash) super(self.class.new(other_hash)) end ``` Like `merge` but the other way around: Merges the receiver into the argument and returns a new hash with indifferent access as result: ``` hash = ActiveSupport::HashWithIndifferentAccess.new hash['a'] = nil hash.reverse_merge(a: 0, b: 1) # => {"a"=>nil, "b"=>1} ``` Calls superclass method [`Hash#reverse_merge`](../hash#method-i-reverse_merge) Also aliased as: [with\_defaults](hashwithindifferentaccess#method-i-with_defaults) reverse\_merge!(other\_hash) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 281 def reverse_merge!(other_hash) super(self.class.new(other_hash)) end ``` Same semantics as `reverse_merge` but modifies the receiver in-place. Calls superclass method [`Hash#reverse_merge!`](../hash#method-i-reverse_merge-21) Also aliased as: [with\_defaults!](hashwithindifferentaccess#method-i-with_defaults-21) select(\*args, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 319 def select(*args, &block) return to_enum(:select) unless block_given? dup.tap { |hash| hash.select!(*args, &block) } end ``` slice(\*keys) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 347 def slice(*keys) keys.map! { |key| convert_key(key) } self.class.new(super) end ``` Calls superclass method slice!(\*keys) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 352 def slice!(*keys) keys.map! 
{ |key| convert_key(key) } super end ``` Calls superclass method [`Hash#slice!`](../hash#method-i-slice-21) store(key, value) Alias for: [[]=](hashwithindifferentaccess#method-i-5B-5D-3D) stringify\_keys() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 310 def stringify_keys; dup end ``` stringify\_keys!() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 308 def stringify_keys!; self end ``` symbolize\_keys() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 314 def symbolize_keys; to_hash.symbolize_keys! end ``` Also aliased as: [to\_options](hashwithindifferentaccess#method-i-to_options) to\_hash() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 362 def to_hash _new_hash = Hash.new set_defaults(_new_hash) each do |key, value| _new_hash[key] = convert_value(value, conversion: :to_hash) end _new_hash end ``` Convert to a regular hash with string keys. to\_options() Alias for: [symbolize\_keys](hashwithindifferentaccess#method-i-symbolize_keys) to\_options!() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 317 def to_options!; self end ``` transform\_keys(\*args, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 334 def transform_keys(*args, &block) return to_enum(:transform_keys) unless block_given? dup.tap { |hash| hash.transform_keys!(*args, &block) } end ``` transform\_keys!() { |key| ... } Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 339 def transform_keys! return enum_for(:transform_keys!) { size } unless block_given? 
keys.each do |key| self[yield(key)] = delete(key) end self end ``` transform\_values(\*args, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 329 def transform_values(*args, &block) return to_enum(:transform_values) unless block_given? dup.tap { |hash| hash.transform_values!(*args, &block) } end ``` update(\*other\_hashes, &block) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 130 def update(*other_hashes, &block) if other_hashes.size == 1 update_with_single_argument(other_hashes.first, block) else other_hashes.each do |other_hash| update_with_single_argument(other_hash, block) end end self end ``` Updates the receiver in-place, merging in the hashes passed as arguments: ``` hash_1 = ActiveSupport::HashWithIndifferentAccess.new hash_1[:key] = 'value' hash_2 = ActiveSupport::HashWithIndifferentAccess.new hash_2[:key] = 'New Value!' hash_1.update(hash_2) # => {"key"=>"New Value!"} hash = ActiveSupport::HashWithIndifferentAccess.new hash.update({ "a" => 1 }, { "b" => 2 }) # => { "a" => 1, "b" => 2 } ``` The arguments can be either an `ActiveSupport::HashWithIndifferentAccess` or a regular `Hash`. In either case the merge respects the semantics of indifferent access. If the argument is a regular hash with keys `:key` and `"key"`, only one of the values ends up in the receiver, but which one is unspecified. When given a block, the value for duplicated keys will be determined by the result of invoking the block with the duplicated key, the value in the receiver, and the value in `other_hash`. 
The rules for duplicated keys follow the semantics of indifferent access: ``` hash_1[:key] = 10 hash_2['key'] = 12 hash_1.update(hash_2) { |key, old, new| old + new } # => {"key"=>22} ``` Also aliased as: [regular\_update](hashwithindifferentaccess#method-i-regular_update), [merge!](hashwithindifferentaccess#method-i-merge-21) values\_at(\*keys) Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 231 def values_at(*keys) super(*keys.map { |key| convert_key(key) }) end ``` Returns an array of the values at the specified indices: ``` hash = ActiveSupport::HashWithIndifferentAccess.new hash[:a] = 'x' hash[:b] = 'y' hash.values_at('a', 'b') # => ["x", "y"] ``` Calls superclass method with\_defaults(other\_hash) Alias for: [reverse\_merge](hashwithindifferentaccess#method-i-reverse_merge) with\_defaults!(other\_hash) Alias for: [reverse\_merge!](hashwithindifferentaccess#method-i-reverse_merge-21) with\_indifferent\_access() Show source ``` # File activesupport/lib/active_support/hash_with_indifferent_access.rb, line 60 def with_indifferent_access dup end ``` without(\*keys) Alias for: [except](hashwithindifferentaccess#method-i-except)
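The duplicated-key rule that `update` applies can be sketched in plain Ruby. Keys are normalized (here, symbols to strings) before merging, and an optional block resolves collisions; this is an illustration of the semantics, not the library code:

```ruby
# Pure-Ruby sketch of indifferent update: normalize each incoming key,
# then let the block (if any) resolve collisions with existing entries.
def indifferent_update(receiver, other)
  other.each_pair do |key, value|
    key = key.is_a?(Symbol) ? key.to_s : key
    value = yield(key, receiver[key], value) if block_given? && receiver.key?(key)
    receiver[key] = value
  end
  receiver
end

hash_1 = { "key" => 10 }
hash_2 = { key: 12 }
indifferent_update(hash_1, hash_2) { |key, old, new| old + new }
# hash_1 => { "key" => 22 }
```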
rails class ActiveSupport::TestCase class ActiveSupport::TestCase ============================== Parent: Minitest::Test Included modules: [ActiveSupport::Testing::Assertions](testing/assertions), [ActiveSupport::Testing::Deprecation](testing/deprecation), [ActiveSupport::Testing::TimeHelpers](testing/timehelpers), [ActiveSupport::Testing::FileFixtures](testing/filefixtures) Assertion parallelize(workers: :number\_of\_processors, with: :processes, threshold: ActiveSupport.test\_parallelization\_threshold) Show source ``` # File activesupport/lib/active_support/test_case.rb, line 79 def parallelize(workers: :number_of_processors, with: :processes, threshold: ActiveSupport.test_parallelization_threshold) workers = Concurrent.physical_processor_count if workers == :number_of_processors workers = ENV["PARALLEL_WORKERS"].to_i if ENV["PARALLEL_WORKERS"] return if workers <= 1 Minitest.parallel_executor = ActiveSupport::Testing::ParallelizeExecutor.new(size: workers, with: with, threshold: threshold) end ``` Parallelizes the test suite. Takes a `workers` argument that controls how many times the process is forked. For each process a new database will be created, suffixed with the worker number. ``` test-database-0 test-database-1 ``` If `ENV["PARALLEL_WORKERS"]` is set, the workers argument will be ignored and the environment variable will be used instead. This is useful for CI environments, or other environments where you may need more workers than you do for local testing. If the number of workers is set to `1` or fewer, the tests will not be parallelized. If `workers` is set to `:number_of_processors`, the number of workers will be set to the actual core count on the machine you are on. The default parallelization method is to fork processes. If you'd like to use threads instead you can pass `with: :threads` to the `parallelize` method. Note that threaded parallelization does not create multiple databases and will not work with system tests at this time. 
``` parallelize(workers: :number_of_processors, with: :threads) ``` The threaded parallelization uses minitest's parallel executor directly. The processes parallelization uses a Ruby DRb server. Because parallelization presents an overhead, it is only enabled when the number of tests to run is above the `threshold` param. The default value is 50, and it's configurable via `config.active_support.test_parallelization_threshold`. parallelize\_setup(&block) Show source ``` # File activesupport/lib/active_support/test_case.rb, line 101 def parallelize_setup(&block) ActiveSupport::Testing::Parallelization.after_fork_hook(&block) end ``` Set up hook for parallel testing. This can be used if you have multiple databases or any behavior that needs to be run after the process is forked but before the tests run. Note: this feature is not available with the threaded parallelization. In your `test_helper.rb` add the following: ``` class ActiveSupport::TestCase parallelize_setup do # create databases end end ``` parallelize\_teardown(&block) Show source ``` # File activesupport/lib/active_support/test_case.rb, line 118 def parallelize_teardown(&block) ActiveSupport::Testing::Parallelization.run_cleanup_hook(&block) end ``` Clean up hook for parallel testing. This can be used to drop databases if your app uses multiple write/read databases or other clean up before the tests finish. This runs before the forked process is closed. Note: this feature is not available with the threaded parallelization. In your `test_helper.rb` add the following: ``` class ActiveSupport::TestCase parallelize_teardown do # drop databases end end ``` test\_order() Show source ``` # File activesupport/lib/active_support/test_case.rb, line 42 def test_order ActiveSupport.test_order ||= :random end ``` Returns the order in which test cases are run. ``` ActiveSupport::TestCase.test_order # => :random ``` Possible values are `:random`, `:parallel`, `:alpha`, `:sorted`. Defaults to `:random`. 
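Putting `parallelize` and its setup/teardown hooks together, a `test_helper.rb` might look like the following sketch (the `create_database` and `drop_database` helpers are hypothetical placeholders for whatever your app uses):

```ruby
# test_helper.rb (sketch) - parallel workers with per-worker database setup.
# create_database/drop_database are hypothetical helpers, not Rails APIs.
class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)

  parallelize_setup do |worker|
    create_database("test-database-#{worker}")
  end

  parallelize_teardown do |worker|
    drop_database("test-database-#{worker}")
  end
end
```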
test\_order=(new\_order) Show source ``` # File activesupport/lib/active_support/test_case.rb, line 32 def test_order=(new_order) ActiveSupport.test_order = new_order end ``` Sets the order in which test cases are run. ``` ActiveSupport::TestCase.test_order = :random # => :random ``` Valid values are: * `:random` (to run tests in random order) * `:parallel` (to run tests in parallel) * `:sorted` (to run tests alphabetically by method name) * `:alpha` (equivalent to `:sorted`) rails class ActiveSupport::MessageEncryptor class ActiveSupport::MessageEncryptor ====================================== Parent: [Object](../object) [`MessageEncryptor`](messageencryptor) is a simple way to encrypt values which get stored somewhere you don't trust. The cipher text and initialization vector are base64 encoded and returned to you. This can be used in situations similar to the `MessageVerifier`, but where you don't want users to be able to determine the value of the payload. ``` len = ActiveSupport::MessageEncryptor.key_len salt = SecureRandom.random_bytes(len) key = ActiveSupport::KeyGenerator.new('password').generate_key(salt, len) # => "\x89\xE0\x156\xAC..." crypt = ActiveSupport::MessageEncryptor.new(key) # => #<ActiveSupport::MessageEncryptor ...> encrypted_data = crypt.encrypt_and_sign('my secret data') # => "NlFBTTMwOUV5UlA1QlNEN2xkY2d6eThYWWh..." crypt.decrypt_and_verify(encrypted_data) # => "my secret data" ``` The `decrypt_and_verify` method will raise an `ActiveSupport::MessageEncryptor::InvalidMessage` exception if the data provided cannot be decrypted or verified. ``` crypt.decrypt_and_verify('not encrypted data') # => ActiveSupport::MessageEncryptor::InvalidMessage ``` ### Confining messages to a specific purpose By default any message can be used throughout your app. But they can also be confined to a specific `:purpose`. 
``` token = crypt.encrypt_and_sign("this is the chair", purpose: :login) ``` Then that same purpose must be passed when verifying to get the data back out: ``` crypt.decrypt_and_verify(token, purpose: :login) # => "this is the chair" crypt.decrypt_and_verify(token, purpose: :shipping) # => nil crypt.decrypt_and_verify(token) # => nil ``` Likewise, if a message has no purpose it won't be returned when verifying with a specific purpose. ``` token = crypt.encrypt_and_sign("the conversation is lively") crypt.decrypt_and_verify(token, purpose: :scare_tactics) # => nil crypt.decrypt_and_verify(token) # => "the conversation is lively" ``` ### Making messages expire By default messages last forever and verifying one year from now will still return the original value. But messages can be set to expire at a given time with `:expires_in` or `:expires_at`. ``` crypt.encrypt_and_sign(parcel, expires_in: 1.month) crypt.encrypt_and_sign(doowad, expires_at: Time.now.end_of_year) ``` Then the messages can be verified and returned up to the expire time. Thereafter, verifying returns `nil`. ### Rotating keys [`MessageEncryptor`](messageencryptor) also supports rotating out old configurations by falling back to a stack of encryptors. Call `rotate` to build and add an encryptor so `decrypt_and_verify` will also try the fallback. By default any rotated encryptors use the values of the primary encryptor unless specified otherwise. You'd give your encryptor the new defaults: ``` crypt = ActiveSupport::MessageEncryptor.new(@secret, cipher: "aes-256-gcm") ``` Then gradually rotate the old values out by adding them as fallbacks. Any message generated with the old values will then work until the rotation is removed. ``` crypt.rotate old_secret # Fallback to an old secret instead of @secret. crypt.rotate cipher: "aes-256-cbc" # Fallback to an old cipher instead of aes-256-gcm. 
``` Though if both the secret and the cipher were changed at the same time, the above should be combined into: ``` crypt.rotate old_secret, cipher: "aes-256-cbc" ``` OpenSSLCipherError key\_len(cipher = default\_cipher) Show source ``` # File activesupport/lib/active_support/message_encryptor.rb, line 163 def self.key_len(cipher = default_cipher) OpenSSL::Cipher.new(cipher).key_len end ``` Given a cipher, returns the key length of the cipher to help generate a key of the desired size. new(secret, sign\_secret = nil, cipher: nil, digest: nil, serializer: nil) Show source ``` # File activesupport/lib/active_support/message_encryptor.rb, line 141 def initialize(secret, sign_secret = nil, cipher: nil, digest: nil, serializer: nil) @secret = secret @sign_secret = sign_secret @cipher = cipher || self.class.default_cipher @digest = digest || "SHA1" unless aead_mode? @verifier = resolve_verifier @serializer = serializer || Marshal end ``` Initialize a new [`MessageEncryptor`](messageencryptor). `secret` must be at least as long as the cipher key size. For the default 'aes-256-gcm' cipher, this is 256 bits. If you are using a user-entered secret, you can generate a suitable key by using `ActiveSupport::KeyGenerator` or a similar key derivation function. The first additional parameter is used as the signature key for `MessageVerifier`. This allows you to specify keys to encrypt and sign data. ``` ActiveSupport::MessageEncryptor.new('secret', 'signature_secret') ``` Options: * `:cipher` - Cipher to use. Can be any cipher returned by `OpenSSL::Cipher.ciphers`. Default is 'aes-256-gcm'. * `:digest` - [`String`](../string) of digest to use for signing. Default is `SHA1`. Ignored when using an AEAD cipher like 'aes-256-gcm'. * `:serializer` - [`Object`](../object) serializer to use. Default is `Marshal`. 
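What encryption with the default 'aes-256-gcm' cipher boils down to can be sketched with Ruby's `openssl` standard library. This is an illustration only; the real `MessageEncryptor` additionally serializes the payload and uses its own token format:

```ruby
require "openssl"
require "base64"

# Encrypt with AES-256-GCM, then base64-encode ciphertext, IV, and auth tag.
key = OpenSSL::Cipher.new("aes-256-gcm").random_key

cipher = OpenSSL::Cipher.new("aes-256-gcm")
cipher.encrypt
cipher.key = key
iv = cipher.random_iv
ciphertext = cipher.update("my secret data") + cipher.final
tag = cipher.auth_tag

token = [ciphertext, iv, tag].map { |part| Base64.strict_encode64(part) }.join("--")

# Decrypt by reversing the steps; the auth tag must be set before #final,
# which is where tampering would raise OpenSSL::Cipher::CipherError.
ct, iv, tag = token.split("--").map { |part| Base64.strict_decode64(part) }
decipher = OpenSSL::Cipher.new("aes-256-gcm")
decipher.decrypt
decipher.key = key
decipher.iv = iv
decipher.auth_tag = tag
plaintext = decipher.update(ct) + decipher.final
# plaintext => "my secret data"
```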
decrypt\_and\_verify(data, purpose: nil, \*\*) Show source ``` # File activesupport/lib/active_support/message_encryptor.rb, line 158 def decrypt_and_verify(data, purpose: nil, **) _decrypt(verifier.verify(data), purpose) end ``` Decrypt and verify a message. We need to verify the message in order to avoid padding attacks. Reference: [www.limited-entropy.com/padding-oracle-attacks](https://www.limited-entropy.com/padding-oracle-attacks)/. encrypt\_and\_sign(value, expires\_at: nil, expires\_in: nil, purpose: nil) Show source ``` # File activesupport/lib/active_support/message_encryptor.rb, line 152 def encrypt_and_sign(value, expires_at: nil, expires_in: nil, purpose: nil) verifier.generate(_encrypt(value, expires_at: expires_at, expires_in: expires_in, purpose: purpose)) end ``` Encrypt and sign a message. We need to sign the message in order to avoid padding attacks. Reference: [www.limited-entropy.com/padding-oracle-attacks](https://www.limited-entropy.com/padding-oracle-attacks)/. rails module ActiveSupport::Dependencies module ActiveSupport::Dependencies =================================== load\_interlock(&block) Show source ``` # File activesupport/lib/active_support/dependencies.rb, line 24 def self.load_interlock(&block) interlock.loading(&block) end ``` Execute the supplied block while holding an exclusive lock, preventing any other thread from being inside a run\_interlock block at the same time. run\_interlock(&block) Show source ``` # File activesupport/lib/active_support/dependencies.rb, line 17 def self.run_interlock(&block) interlock.running(&block) end ``` Execute the supplied block without interference from any concurrent loads. unload\_interlock(&block) Show source ``` # File activesupport/lib/active_support/dependencies.rb, line 31 def self.unload_interlock(&block) interlock.unloading(&block) end ``` Execute the supplied block while holding an exclusive lock, preventing any other thread from being inside a run\_interlock block at the same time. 
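The sharing pattern behind these interlocks is a shared/exclusive lock: many threads may sit inside `run_interlock` ("running") at once, while loading or unloading requires exclusivity. A minimal single-lock sketch of that pattern (not ActiveSupport's actual `ShareLock`, which also supports upgrades and compatibility classes):

```ruby
# Minimal shared/exclusive lock: "running" sections may overlap each other;
# a "loading" section waits until no other section is active.
class TinyInterlock
  def initialize
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @running = 0
    @exclusive = false
  end

  # Shared section: may overlap with other running blocks.
  def running
    @mutex.synchronize do
      @cond.wait(@mutex) while @exclusive
      @running += 1
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @running -= 1
        @cond.broadcast
      end
    end
  end

  # Exclusive section: waits for all shared sections to drain.
  def loading
    @mutex.synchronize do
      @cond.wait(@mutex) while @exclusive || @running > 0
      @exclusive = true
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @exclusive = false
        @cond.broadcast
      end
    end
  end
end

order = []
lock = TinyInterlock.new
lock.running { order << :run }
lock.loading { order << :load }
# order => [:run, :load]
```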
rails module ActiveSupport::Inflector module ActiveSupport::Inflector ================================ The [`Inflector`](inflector) transforms words from singular to plural, class names to table names, modularized class names to ones without, and class names to foreign keys. The default inflections for pluralization, singularization, and uncountable words are kept in inflections.rb. The Rails core team has stated patches for the inflections library will not be accepted in order to avoid breaking legacy applications which may be relying on errant inflections. If you discover an incorrect inflection and require it for your application or wish to define rules for languages other than English, please correct or add them yourself (explained below). ALLOWED\_ENCODINGS\_FOR\_TRANSLITERATE camelize(term, uppercase\_first\_letter = true) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 69 def camelize(term, uppercase_first_letter = true) string = term.to_s # String#camelize takes a symbol (:upper or :lower), so here we also support :lower to keep the methods consistent. if !uppercase_first_letter || uppercase_first_letter == :lower string = string.sub(inflections.acronyms_camelize_regex) { |match| match.downcase! || match } else string = string.sub(/^[a-z\d]*/) { |match| inflections.acronyms[match] || match.capitalize! || match } end string.gsub!(/(?:_|(\/))([a-z\d]*)/i) do word = $2 substituted = inflections.acronyms[word] || word.capitalize! || word $1 ? "::#{substituted}" : substituted end string end ``` Converts strings to UpperCamelCase. If the `uppercase_first_letter` parameter is set to false, then produces lowerCamelCase. Also converts '/' to '::' which is useful for converting paths to namespaces. 
``` camelize('active_model') # => "ActiveModel" camelize('active_model', false) # => "activeModel" camelize('active_model/errors') # => "ActiveModel::Errors" camelize('active_model/errors', false) # => "activeModel::Errors" ``` As a rule of thumb you can think of `camelize` as the inverse of [`underscore`](inflector#method-i-underscore), though there are cases where that does not hold: ``` camelize(underscore('SSLError')) # => "SslError" ``` classify(table\_name) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 208 def classify(table_name) # strip out any leading schema name camelize(singularize(table_name.to_s.sub(/.*\./, ""))) end ``` Creates a class name from a plural table name like Rails does for table names to models. Note that this returns a string and not a [`Class`](../class) (To convert to an actual class follow `classify` with [`constantize`](inflector#method-i-constantize)). ``` classify('ham_and_eggs') # => "HamAndEgg" classify('posts') # => "Post" ``` Singular names are not handled correctly: ``` classify('calculus') # => "Calculu" ``` constantize(camel\_cased\_word) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 279 def constantize(camel_cased_word) Object.const_get(camel_cased_word) end ``` Tries to find a constant with the name specified in the argument string. ``` constantize('Module') # => Module constantize('Foo::Bar') # => Foo::Bar ``` The name is assumed to be the one of a top-level constant, no matter whether it starts with “::” or not. No lexical context is taken into account: ``` C = 'outside' module M C = 'inside' C # => 'inside' constantize('C') # => 'outside', same as ::C end ``` [`NameError`](../nameerror) is raised when the name is not in CamelCase or the constant is unknown. 
dasherize(underscored\_word) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 216 def dasherize(underscored_word) underscored_word.tr("_", "-") end ``` Replaces underscores with dashes in the string. ``` dasherize('puni_puni') # => "puni-puni" ``` deconstantize(path) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 246 def deconstantize(path) path.to_s[0, path.rindex("::") || 0] # implementation based on the one in facets' Module#spacename end ``` Removes the rightmost segment from the constant expression in the string. ``` deconstantize('Net::HTTP') # => "Net" deconstantize('::Net::HTTP') # => "::Net" deconstantize('String') # => "" deconstantize('::String') # => "" deconstantize('') # => "" ``` See also [`demodulize`](inflector#method-i-demodulize). demodulize(path) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 228 def demodulize(path) path = path.to_s if i = path.rindex("::") path[(i + 2)..-1] else path end end ``` Removes the module part from the expression in the string. ``` demodulize('ActiveSupport::Inflector::Inflections') # => "Inflections" demodulize('Inflections') # => "Inflections" demodulize('::Inflections') # => "Inflections" demodulize('') # => "" ``` See also [`deconstantize`](inflector#method-i-deconstantize). foreign\_key(class\_name, separate\_class\_name\_and\_id\_with\_underscore = true) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 257 def foreign_key(class_name, separate_class_name_and_id_with_underscore = true) underscore(demodulize(class_name)) + (separate_class_name_and_id_with_underscore ? "_id" : "id") end ``` Creates a foreign key name from a class name. `separate_class_name_and_id_with_underscore` sets whether the method should put '\_' between the name and 'id'. 
``` foreign_key('Message') # => "message_id" foreign_key('Message', false) # => "messageid" foreign_key('Admin::Post') # => "post_id" ``` humanize(lower\_case\_and\_underscored\_word, capitalize: true, keep\_id\_suffix: false) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 132 def humanize(lower_case_and_underscored_word, capitalize: true, keep_id_suffix: false) result = lower_case_and_underscored_word.to_s.dup inflections.humans.each { |(rule, replacement)| break if result.sub!(rule, replacement) } result.tr!("_", " ") result.lstrip! unless keep_id_suffix result.delete_suffix!(" id") end result.gsub!(/([a-z\d]+)/i) do |match| match.downcase! inflections.acronyms[match] || match end if capitalize result.sub!(/\A\w/) do |match| match.upcase! match end end result end ``` Tweaks an attribute name for display to end users. Specifically, performs these transformations: * Applies human inflection rules to the argument. * Deletes leading underscores, if any. * Removes an “\_id” suffix if present. * Replaces underscores with spaces, if any. * Downcases all words except acronyms. * Capitalizes the first word. The capitalization of the first word can be turned off by setting the `:capitalize` option to false (default is true). The trailing '\_id' can be kept and capitalized by setting the optional parameter `keep_id_suffix` to true (default is false). ``` humanize('employee_salary') # => "Employee salary" humanize('author_id') # => "Author" humanize('author_id', capitalize: false) # => "author" humanize('_id') # => "Id" humanize('author_id', keep_id_suffix: true) # => "Author id" ``` If “SSL” was defined to be an acronym: ``` humanize('ssl_error') # => "SSL error" ``` inflections(locale = :en) { |instance| ... } Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 263 def inflections(locale = :en) if block_given? 
yield Inflections.instance(locale) else Inflections.instance_or_fallback(locale) end end ``` Yields a singleton instance of [`Inflector::Inflections`](inflector/inflections) so you can specify additional inflector rules. If passed an optional locale, rules for other languages can be specified. If not specified, defaults to `:en`. Only rules for English are provided. ``` ActiveSupport::Inflector.inflections(:en) do |inflect| inflect.uncountable 'rails' end ``` ordinal(number) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 324 def ordinal(number) I18n.translate("number.nth.ordinals", number: number) end ``` Returns the suffix that should be added to a number to denote the position in an ordered sequence such as 1st, 2nd, 3rd, 4th. ``` ordinal(1) # => "st" ordinal(2) # => "nd" ordinal(1002) # => "nd" ordinal(1003) # => "rd" ordinal(-11) # => "th" ordinal(-1021) # => "st" ``` ordinalize(number) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 337 def ordinalize(number) I18n.translate("number.nth.ordinalized", number: number) end ``` Turns a number into an ordinal string used to denote the position in an ordered sequence such as 1st, 2nd, 3rd, 4th. ``` ordinalize(1) # => "1st" ordinalize(2) # => "2nd" ordinalize(1002) # => "1002nd" ordinalize(1003) # => "1003rd" ordinalize(-11) # => "-11th" ordinalize(-1021) # => "-1021st" ``` parameterize(string, separator: "-", preserve\_case: false, locale: nil) Show source ``` # File activesupport/lib/active_support/inflector/transliterate.rb, line 121 def parameterize(string, separator: "-", preserve_case: false, locale: nil) # Replace accented chars with their ASCII equivalents. parameterized_string = transliterate(string, locale: locale) # Turn unwanted chars into the separator. parameterized_string.gsub!(/[^a-z0-9\-_]+/i, separator) unless separator.nil? || separator.empty? 
if separator == "-" re_duplicate_separator = /-{2,}/ re_leading_trailing_separator = /^-|-$/i else re_sep = Regexp.escape(separator) re_duplicate_separator = /#{re_sep}{2,}/ re_leading_trailing_separator = /^#{re_sep}|#{re_sep}$/i end # No more than one of the separator in a row. parameterized_string.gsub!(re_duplicate_separator, separator) # Remove leading/trailing separator. parameterized_string.gsub!(re_leading_trailing_separator, "") end parameterized_string.downcase! unless preserve_case parameterized_string end ``` Replaces special characters in a string so that it may be used as part of a 'pretty' URL. ``` parameterize("Donald E. Knuth") # => "donald-e-knuth" parameterize("^très|Jolie-- ") # => "tres-jolie" ``` To use a custom separator, override the `separator` argument. ``` parameterize("Donald E. Knuth", separator: '_') # => "donald_e_knuth" parameterize("^très|Jolie__ ", separator: '_') # => "tres_jolie" ``` To preserve the case of the characters in a string, use the `preserve_case` argument. ``` parameterize("Donald E. Knuth", preserve_case: true) # => "Donald-E-Knuth" parameterize("^très|Jolie-- ", preserve_case: true) # => "tres-Jolie" ``` It preserves dashes and underscores unless they are used as separators: ``` parameterize("^très|Jolie__ ") # => "tres-jolie__" parameterize("^très|Jolie-- ", separator: "_") # => "tres_jolie--" parameterize("^très_Jolie-- ", separator: ".") # => "tres_jolie--" ``` If the optional parameter `locale` is specified, the word will be parameterized as a word of that language. By default, this parameter is set to `nil` and it will use the configured `I18n.locale`. pluralize(word, locale = :en) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 32 def pluralize(word, locale = :en) apply_inflections(word, inflections(locale).plurals, locale) end ``` Returns the plural form of the word in the string. 
If passed an optional `locale` parameter, the word will be pluralized using rules defined for that language. By default, this parameter is set to `:en`. ``` pluralize('post') # => "posts" pluralize('octopus') # => "octopi" pluralize('sheep') # => "sheep" pluralize('words') # => "words" pluralize('CamelOctopus') # => "CamelOctopi" pluralize('ley', :es) # => "leyes" ``` safe\_constantize(camel\_cased\_word) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 305 def safe_constantize(camel_cased_word) constantize(camel_cased_word) rescue NameError => e raise if e.name && !(camel_cased_word.to_s.split("::").include?(e.name.to_s) || e.name.to_s == camel_cased_word.to_s) rescue LoadError => e message = e.respond_to?(:original_message) ? e.original_message : e.message raise unless /Unable to autoload constant #{const_regexp(camel_cased_word)}/.match?(message) end ``` Tries to find a constant with the name specified in the argument string. ``` safe_constantize('Module') # => Module safe_constantize('Foo::Bar') # => Foo::Bar ``` The name is assumed to be the one of a top-level constant, no matter whether it starts with “::” or not. No lexical context is taken into account: ``` C = 'outside' module M C = 'inside' C # => 'inside' safe_constantize('C') # => 'outside', same as ::C end ``` `nil` is returned when the name is not in CamelCase or the constant (or part of it) is unknown. ``` safe_constantize('blargle') # => nil safe_constantize('UnknownModule') # => nil safe_constantize('UnknownModule::Foo::Bar') # => nil ``` singularize(word, locale = :en) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 49 def singularize(word, locale = :en) apply_inflections(word, inflections(locale).singulars, locale) end ``` The reverse of [`pluralize`](inflector#method-i-pluralize), returns the singular form of a word in a string. 
If passed an optional `locale` parameter, the word will be singularized using rules defined for that language. By default, this parameter is set to `:en`. ``` singularize('posts') # => "post" singularize('octopi') # => "octopus" singularize('sheep') # => "sheep" singularize('word') # => "word" singularize('CamelOctopi') # => "CamelOctopus" singularize('leyes', :es) # => "ley" ``` tableize(class\_name) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 194 def tableize(class_name) pluralize(underscore(class_name)) end ``` Creates the name of a table like Rails does for models to table names. This method uses the [`pluralize`](inflector#method-i-pluralize) method on the last word in the string. ``` tableize('RawScaledScorer') # => "raw_scaled_scorers" tableize('ham_and_egg') # => "ham_and_eggs" tableize('fancyCategory') # => "fancy_categories" ``` titleize(word, keep\_id\_suffix: false) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 182 def titleize(word, keep_id_suffix: false) humanize(underscore(word), keep_id_suffix: keep_id_suffix).gsub(/\b(?<!\w['’`()])[a-z]/) do |match| match.capitalize end end ``` Capitalizes all the words and replaces some characters in the string to create a nicer looking title. `titleize` is meant for creating pretty output. It is not used in the Rails internals. The trailing '\_id','Id'.. can be kept and capitalized by setting the optional parameter `keep_id_suffix` to true. By default, this parameter is false. `titleize` is also aliased as `titlecase`. 
``` titleize('man from the boondocks') # => "Man From The Boondocks" titleize('x-men: the last stand') # => "X Men: The Last Stand" titleize('TheManWithoutAPast') # => "The Man Without A Past" titleize('raiders_of_the_lost_ark') # => "Raiders Of The Lost Ark" titleize('string_ending_with_id', keep_id_suffix: true) # => "String Ending With Id" ``` transliterate(string, replacement = "?", locale: nil) Show source ``` # File activesupport/lib/active_support/inflector/transliterate.rb, line 64 def transliterate(string, replacement = "?", locale: nil) string = string.dup if string.frozen? raise ArgumentError, "Can only transliterate strings. Received #{string.class.name}" unless string.is_a?(String) raise ArgumentError, "Cannot transliterate strings with #{string.encoding} encoding" unless ALLOWED_ENCODINGS_FOR_TRANSLITERATE.include?(string.encoding) input_encoding = string.encoding # US-ASCII is a subset of UTF-8 so we'll force encoding as UTF-8 if # US-ASCII is given. This way we can let tidy_bytes handle the string # in the same way as we do for UTF-8 string.force_encoding(Encoding::UTF_8) if string.encoding == Encoding::US_ASCII # GB18030 is Unicode compatible but is not a direct mapping so needs to be # transcoded. Using invalid/undef :replace will result in loss of data in # the event of invalid characters, but since tidy_bytes will replace # invalid/undef with a "?" we're safe to do the same beforehand string.encode!(Encoding::UTF_8, invalid: :replace, undef: :replace) if string.encoding == Encoding::GB18030 transliterated = I18n.transliterate( ActiveSupport::Multibyte::Unicode.tidy_bytes(string).unicode_normalize(:nfc), replacement: replacement, locale: locale ) # Restore the string encoding of the input if it was not UTF-8. 
  # Apply invalid/undef :replace as tidy_bytes does
  transliterated.encode!(input_encoding, invalid: :replace, undef: :replace) if input_encoding != transliterated.encoding

  transliterated
end
```

Replaces non-ASCII characters with an ASCII approximation, or if none exists, a replacement character which defaults to “?”.

```
transliterate('Ærøskøbing') # => "AEroskobing"
```

Default approximations are provided for Western/Latin characters, e.g., “ø”, “ñ”, “é”, “ß”, etc.

This method is I18n aware, so you can set up custom approximations for a locale. This can be useful, for example, to transliterate German's “ü” and “ö” to “ue” and “oe”, or to add support for transliterating Russian to ASCII.

In order to make your custom transliterations available, you must set them as the `i18n.transliterate.rule` i18n key:

```
# Store the transliterations in locales/de.yml
i18n:
  transliterate:
    rule:
      ü: "ue"
      ö: "oe"

# Or set them using Ruby
I18n.backend.store_translations(:de, i18n: {
  transliterate: {
    rule: {
      'ü' => 'ue',
      'ö' => 'oe'
    }
  }
})
```

The value for `i18n.transliterate.rule` can be a simple [`Hash`](../hash) that maps characters to ASCII approximations as shown above, or, for more complex requirements, a Proc:

```
I18n.backend.store_translations(:de, i18n: {
  transliterate: {
    rule: ->(string) { MyTransliterator.transliterate(string) }
  }
})
```

Now you can have different transliterations for each locale:

```
transliterate('Jürgen', locale: :en)
# => "Jurgen"
transliterate('Jürgen', locale: :de)
# => "Juergen"
```

Transliteration is restricted to UTF-8, US-ASCII, and GB18030 strings. Other encodings will raise an ArgumentError.
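The per-locale rule lookup described above can be sketched in plain Ruby, without the I18n backend. This is a toy stand-in, not ActiveSupport's implementation: the `RULES` hash and its entries are invented for illustration, and real approximations come from the `i18n.transliterate.rule` key as shown earlier.

```ruby
# Invented rule table; real rules live in the i18n backend and cover far more characters.
RULES = {
  en: { "ü" => "u", "ö" => "o", "é" => "e" },
  de: { "ü" => "ue", "ö" => "oe", "é" => "e" }
}

def toy_transliterate(string, replacement = "?", locale: :en)
  rules = RULES.fetch(locale, {})
  string.each_char.map do |char|
    if char.ascii_only?
      char                            # ASCII passes through untouched
    else
      rules.fetch(char, replacement)  # approximate, or fall back to "?"
    end
  end.join
end

toy_transliterate("Jürgen", locale: :en) # => "Jurgen"
toy_transliterate("Jürgen", locale: :de) # => "Juergen"
toy_transliterate("λ")                   # => "?"
```

The per-character hash lookup is where a Proc rule would slot in for more complex requirements.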
underscore(camel\_cased\_word) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 96 def underscore(camel_cased_word) return camel_cased_word.to_s unless /[A-Z-]|::/.match?(camel_cased_word) word = camel_cased_word.to_s.gsub("::", "/") word.gsub!(inflections.acronyms_underscore_regex) { "#{$1 && '_' }#{$2.downcase}" } word.gsub!(/([A-Z]+)(?=[A-Z][a-z])|([a-z\d])(?=[A-Z])/) { ($1 || $2) << "_" } word.tr!("-", "_") word.downcase! word end ``` Makes an underscored, lowercase form from the expression in the string. Changes '::' to '/' to convert namespaces to paths. ``` underscore('ActiveModel') # => "active_model" underscore('ActiveModel::Errors') # => "active_model/errors" ``` As a rule of thumb you can think of `underscore` as the inverse of [`camelize`](inflector#method-i-camelize), though there are cases where that does not hold: ``` camelize(underscore('SSLError')) # => "SslError" ``` upcase\_first(string) Show source ``` # File activesupport/lib/active_support/inflector/methods.rb, line 163 def upcase_first(string) string.length > 0 ? string[0].upcase.concat(string[1..-1]) : "" end ``` Converts just the first character to uppercase. ``` upcase_first('what a Lovely Day') # => "What a Lovely Day" upcase_first('w') # => "W" upcase_first('') # => "" ```
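Setting aside acronym handling, the core of the `underscore` source shown above can be traced as a standalone sketch (simplified: the `inflections.acronyms_underscore_regex` step is omitted, so configured acronyms are not honored here):

```ruby
def simple_underscore(camel_cased_word)
  word = camel_cased_word.to_s.gsub("::", "/")  # namespaces become path segments
  # Break runs of capitals before a capitalized word ("SSL|Error") and
  # lower/digit-to-upper boundaries ("e|M"), inserting "_" at each break.
  word.gsub!(/([A-Z]+)(?=[A-Z][a-z])|([a-z\d])(?=[A-Z])/) { ($1 || $2) << "_" }
  word.tr!("-", "_")
  word.downcase!
  word
end

simple_underscore("ActiveModel::Errors") # => "active_model/errors"
simple_underscore("SSLError")            # => "ssl_error"
```

The second example shows why `camelize(underscore('SSLError'))` round-trips to `"SslError"` rather than the original: the acronym boundary information is lost in `"ssl_error"`.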
rails module ActiveSupport::NumberHelper module ActiveSupport::NumberHelper =================================== number\_to\_currency(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 114 def number_to_currency(number, options = {}) NumberToCurrencyConverter.convert(number, options) end ``` Formats a `number` into a currency string (e.g., $13.65). You can customize the format in the `options` hash. The currency unit and number formatting of the current locale will be used unless otherwise specified in the provided options. No currency conversion is performed. If the user is given a way to change their locale, they will also be able to change the relative value of the currency displayed with this helper. If your application will ever support multiple locales, you may want to specify a constant `:locale` option or consider using a library capable of currency conversion. #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:precision` - Sets the level of precision (defaults to 2). * `:round_mode` - Determine how rounding is performed (defaults to :default. See BigDecimal::mode) * `:unit` - Sets the denomination of the currency (defaults to “$”). * `:separator` - Sets the separator between the units (defaults to “.”). * `:delimiter` - Sets the thousands delimiter (defaults to “,”). * `:format` - Sets the format for non-negative numbers (defaults to “%u%n”). Fields are `%u` for the currency, and `%n` for the number. * `:negative_format` - Sets the format for negative numbers (defaults to prepending a hyphen to the formatted number given by `:format`). Accepts the same fields than `:format`, except `%n` is here the absolute value of the number. * `:strip_insignificant_zeros` - If `true` removes insignificant zeros after the decimal separator (defaults to `false`). 
#### Examples ``` number_to_currency(1234567890.50) # => "$1,234,567,890.50" number_to_currency(1234567890.506) # => "$1,234,567,890.51" number_to_currency(1234567890.506, precision: 3) # => "$1,234,567,890.506" number_to_currency(1234567890.506, locale: :fr) # => "1 234 567 890,51 €" number_to_currency('123a456') # => "$123a456" number_to_currency(-0.456789, precision: 0) # => "$0" number_to_currency(-1234567890.50, negative_format: '(%u%n)') # => "($1,234,567,890.50)" number_to_currency(1234567890.50, unit: '&pound;', separator: ',', delimiter: '') # => "&pound;1234567890,50" number_to_currency(1234567890.50, unit: '&pound;', separator: ',', delimiter: '', format: '%n %u') # => "1234567890,50 &pound;" number_to_currency(1234567890.50, strip_insignificant_zeros: true) # => "$1,234,567,890.5" number_to_currency(1234567890.50, precision: 0, round_mode: :up) # => "$1,234,567,891" ``` number\_to\_delimited(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 189 def number_to_delimited(number, options = {}) NumberToDelimitedConverter.convert(number, options) end ``` Formats a `number` with grouped thousands using `delimiter` (e.g., 12,324). You can customize the format in the `options` hash. #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:delimiter` - Sets the thousands delimiter (defaults to “,”). * `:separator` - Sets the separator between the fractional and integer digits (defaults to “.”). * `:delimiter_pattern` - Sets a custom regular expression used for deriving the placement of delimiter. Helpful when using currency formats like INR. 
#### Examples ``` number_to_delimited(12345678) # => "12,345,678" number_to_delimited('123456') # => "123,456" number_to_delimited(12345678.05) # => "12,345,678.05" number_to_delimited(12345678, delimiter: '.') # => "12.345.678" number_to_delimited(12345678, delimiter: ',') # => "12,345,678" number_to_delimited(12345678.05, separator: ' ') # => "12,345,678 05" number_to_delimited(12345678.05, locale: :fr) # => "12 345 678,05" number_to_delimited('112a') # => "112a" number_to_delimited(98765432.98, delimiter: ' ', separator: ',') # => "98 765 432,98" number_to_delimited("123456.78", delimiter_pattern: /(\d+?)(?=(\d\d)+(\d)(?!\d))/) # => "1,23,456.78" ``` number\_to\_human(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 391 def number_to_human(number, options = {}) NumberToHumanConverter.convert(number, options) end ``` Pretty prints (formats and approximates) a number in a way it is more readable by humans (e.g.: 1200000000 becomes “1.2 Billion”). This is useful for numbers that can get very large (and too hard to read). See `number_to_human_size` if you want to print a file size. You can also define your own unit-quantifier names if you want to use other decimal units (e.g.: 1500 becomes “1.5 kilometers”, 0.150 becomes “150 milliliters”, etc). You may define a wide range of unit quantifiers, even fractional ones (centi, deci, mili, etc). #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:precision` - Sets the precision of the number (defaults to 3). * `:round_mode` - Determine how rounding is performed (defaults to :default. See BigDecimal::mode) * `:significant` - If `true`, precision will be the number of significant\_digits. If `false`, the number of fractional digits (defaults to `true`) * `:separator` - Sets the separator between the fractional and integer digits (defaults to “.”). * `:delimiter` - Sets the thousands delimiter (defaults to “”). 
* `:strip_insignificant_zeros` - If `true` removes insignificant zeros after the decimal separator (defaults to `true`)
* `:units` - A [`Hash`](../hash) of unit quantifier names. Or a string containing an i18n scope where to find this hash. It might have the following keys:
	+ **integers**: `:unit`, `:ten`, `:hundred`, `:thousand`, `:million`, `:billion`, `:trillion`, `:quadrillion`
	+ **fractionals**: `:deci`, `:centi`, `:mili`, `:micro`, `:nano`, `:pico`, `:femto`
* `:format` - Sets the format of the output string (defaults to “%n %u”). The field types are:
	+ %u - The quantifier (ex.: 'thousand')
	+ %n - The number

#### Examples

```
number_to_human(123)                 # => "123"
number_to_human(1234)                # => "1.23 Thousand"
number_to_human(12345)               # => "12.3 Thousand"
number_to_human(1234567)             # => "1.23 Million"
number_to_human(1234567890)          # => "1.23 Billion"
number_to_human(1234567890123)       # => "1.23 Trillion"
number_to_human(1234567890123456)    # => "1.23 Quadrillion"
number_to_human(1234567890123456789) # => "1230 Quadrillion"
number_to_human(489939, precision: 2)                   # => "490 Thousand"
number_to_human(489939, precision: 4)                   # => "489.9 Thousand"
number_to_human(489939, precision: 2, round_mode: :down) # => "480 Thousand"
number_to_human(1234567, precision: 4, significant: false)                 # => "1.2346 Million"
number_to_human(1234567, precision: 1, separator: ',', significant: false) # => "1,2 Million"
number_to_human(500000000, precision: 5)         # => "500 Million"
number_to_human(12345012345, significant: false) # => "12.345 Billion"
```

Non-significant zeros after the decimal separator are stripped out by default (set `:strip_insignificant_zeros` to `false` to change that):

```
number_to_human(12.00001)                                   # => "12"
number_to_human(12.00001, strip_insignificant_zeros: false) # => "12.0"
```

#### Custom Unit Quantifiers

You can also use your own custom unit quantifiers:

```
number_to_human(500000, units: { unit: 'ml',
thousand: 'lt' }) # => "500 lt" ``` If in your I18n locale you have: ``` distance: centi: one: "centimeter" other: "centimeters" unit: one: "meter" other: "meters" thousand: one: "kilometer" other: "kilometers" billion: "gazillion-distance" ``` Then you could do: ``` number_to_human(543934, units: :distance) # => "544 kilometers" number_to_human(54393498, units: :distance) # => "54400 kilometers" number_to_human(54393498000, units: :distance) # => "54.4 gazillion-distance" number_to_human(343, units: :distance, precision: 1) # => "300 meters" number_to_human(1, units: :distance) # => "1 meter" number_to_human(0.34, units: :distance) # => "34 centimeters" ``` number\_to\_human\_size(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 283 def number_to_human_size(number, options = {}) NumberToHumanSizeConverter.convert(number, options) end ``` Formats the bytes in `number` into a more understandable representation (e.g., giving it 1500 yields 1.46 KB). This method is useful for reporting file sizes to users. You can customize the format in the `options` hash. See `number_to_human` if you want to pretty-print a generic number. #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:precision` - Sets the precision of the number (defaults to 3). * `:round_mode` - Determine how rounding is performed (defaults to :default. See BigDecimal::mode) * `:significant` - If `true`, precision will be the number of significant\_digits. If `false`, the number of fractional digits (defaults to `true`) * `:separator` - Sets the separator between the fractional and integer digits (defaults to “.”). * `:delimiter` - Sets the thousands delimiter (defaults to “”). 
* `:strip_insignificant_zeros` - If `true` removes insignificant zeros after the decimal separator (defaults to `true`) #### Examples ``` number_to_human_size(123) # => "123 Bytes" number_to_human_size(1234) # => "1.21 KB" number_to_human_size(12345) # => "12.1 KB" number_to_human_size(1234567) # => "1.18 MB" number_to_human_size(1234567890) # => "1.15 GB" number_to_human_size(1234567890123) # => "1.12 TB" number_to_human_size(1234567890123456) # => "1.1 PB" number_to_human_size(1234567890123456789) # => "1.07 EB" number_to_human_size(1234567, precision: 2) # => "1.2 MB" number_to_human_size(483989, precision: 2) # => "470 KB" number_to_human_size(483989, precision: 2, round_mode: :up) # => "480 KB" number_to_human_size(1234567, precision: 2, separator: ',') # => "1,2 MB" number_to_human_size(1234567890123, precision: 5) # => "1.1228 TB" number_to_human_size(524288000, precision: 5) # => "500 MB" ``` number\_to\_percentage(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 154 def number_to_percentage(number, options = {}) NumberToPercentageConverter.convert(number, options) end ``` Formats a `number` as a percentage string (e.g., 65%). You can customize the format in the `options` hash. #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:precision` - Sets the precision of the number (defaults to 3). Keeps the number's precision if `nil`. * `:round_mode` - Determine how rounding is performed (defaults to :default. See BigDecimal::mode) * `:significant` - If `true`, precision will be the number of significant\_digits. If `false`, the number of fractional digits (defaults to `false`). * `:separator` - Sets the separator between the fractional and integer digits (defaults to “.”). * `:delimiter` - Sets the thousands delimiter (defaults to “”). * `:strip_insignificant_zeros` - If `true` removes insignificant zeros after the decimal separator (defaults to `false`). 
* `:format` - Specifies the format of the percentage string The number field is `%n` (defaults to “%n%”). #### Examples ``` number_to_percentage(100) # => "100.000%" number_to_percentage('98') # => "98.000%" number_to_percentage(100, precision: 0) # => "100%" number_to_percentage(1000, delimiter: '.', separator: ',') # => "1.000,000%" number_to_percentage(302.24398923423, precision: 5) # => "302.24399%" number_to_percentage(1000, locale: :fr) # => "1000,000%" number_to_percentage(1000, precision: nil) # => "1000%" number_to_percentage('98a') # => "98a%" number_to_percentage(100, format: '%n %') # => "100.000 %" number_to_percentage(302.24398923423, precision: 5, round_mode: :down) # => "302.24398%" ``` number\_to\_phone(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 53 def number_to_phone(number, options = {}) NumberToPhoneConverter.convert(number, options) end ``` Formats a `number` into a phone number (US by default e.g., (555) 123-9876). You can customize the format in the `options` hash. #### Options * `:area_code` - Adds parentheses around the area code. * `:delimiter` - Specifies the delimiter to use (defaults to “-”). * `:extension` - Specifies an extension to add to the end of the generated number. * `:country_code` - Sets the country code for the phone number. * `:pattern` - Specifies how the number is divided into three groups with the custom regexp to override the default format. 
#### Examples ``` number_to_phone(5551234) # => "555-1234" number_to_phone('5551234') # => "555-1234" number_to_phone(1235551234) # => "123-555-1234" number_to_phone(1235551234, area_code: true) # => "(123) 555-1234" number_to_phone(1235551234, delimiter: ' ') # => "123 555 1234" number_to_phone(1235551234, area_code: true, extension: 555) # => "(123) 555-1234 x 555" number_to_phone(1235551234, country_code: 1) # => "+1-123-555-1234" number_to_phone('123a456') # => "123a456" number_to_phone(1235551234, country_code: 1, extension: 1343, delimiter: '.') # => "+1.123.555.1234 x 1343" number_to_phone(75561234567, pattern: /(\d{1,4})(\d{4})(\d{4})$/, area_code: true) # => "(755) 6123-4567" number_to_phone(13312345678, pattern: /(\d{3})(\d{4})(\d{4})$/) # => "133-1234-5678" ``` number\_to\_rounded(number, options = {}) Show source ``` # File activesupport/lib/active_support/number_helper.rb, line 236 def number_to_rounded(number, options = {}) NumberToRoundedConverter.convert(number, options) end ``` Formats a `number` with the specified level of `:precision` (e.g., 112.32 has a precision of 2 if `:significant` is `false`, and 5 if `:significant` is `true`). You can customize the format in the `options` hash. #### Options * `:locale` - Sets the locale to be used for formatting (defaults to current locale). * `:precision` - Sets the precision of the number (defaults to 3). Keeps the number's precision if `nil`. * `:round_mode` - Determine how rounding is performed (defaults to :default. See BigDecimal::mode) * `:significant` - If `true`, precision will be the number of significant\_digits. If `false`, the number of fractional digits (defaults to `false`). * `:separator` - Sets the separator between the fractional and integer digits (defaults to “.”). * `:delimiter` - Sets the thousands delimiter (defaults to “”). * `:strip_insignificant_zeros` - If `true` removes insignificant zeros after the decimal separator (defaults to `false`). 
#### Examples ``` number_to_rounded(111.2345) # => "111.235" number_to_rounded(111.2345, precision: 2) # => "111.23" number_to_rounded(13, precision: 5) # => "13.00000" number_to_rounded(389.32314, precision: 0) # => "389" number_to_rounded(111.2345, significant: true) # => "111" number_to_rounded(111.2345, precision: 1, significant: true) # => "100" number_to_rounded(13, precision: 5, significant: true) # => "13.000" number_to_rounded(13, precision: nil) # => "13" number_to_rounded(389.32314, precision: 0, round_mode: :up) # => "390" number_to_rounded(111.234, locale: :fr) # => "111,234" number_to_rounded(13, precision: 5, significant: true, strip_insignificant_zeros: true) # => "13" number_to_rounded(389.32314, precision: 4, significant: true) # => "389.3" number_to_rounded(1111.2345, precision: 2, separator: ',', delimiter: '.') # => "1.111,23" ``` rails class ActiveSupport::Logger::SimpleFormatter class ActiveSupport::Logger::SimpleFormatter ============================================= Parent: Logger::Formatter Simple formatter which only displays the message. call(severity, timestamp, progname, msg) Show source ``` # File activesupport/lib/active_support/logger.rb, line 88 def call(severity, timestamp, progname, msg) "#{String === msg ? msg : msg.inspect}\n" end ``` This method is invoked when a log event occurs rails class ActiveSupport::Cache::MemoryStore class ActiveSupport::Cache::MemoryStore ======================================== Parent: [ActiveSupport::Cache::Store](store) A cache store implementation which stores everything into memory in the same process. If you're running multiple Ruby on Rails server processes (which is the case if you're using Phusion Passenger or puma clustered mode), then this means that Rails server process instances won't be able to share cache data with each other and this may not be the most appropriate cache in that scenario. This cache has a bounded size specified by the :size options to the initializer (default is 32Mb). 
When the cache exceeds the allotted size, a cleanup will occur which tries to prune the cache down to three quarters of the maximum size by removing the least recently used entries. Unlike other [`Cache`](../cache) store implementations, [`MemoryStore`](memorystore) does not compress values by default. [`MemoryStore`](memorystore) does not benefit from compression as much as other [`Store`](store) implementations, as it does not send data over a network. However, when compression is enabled, it still pays the full cost of compression in terms of cpu use. [`MemoryStore`](memorystore) is thread-safe. PER\_ENTRY\_OVERHEAD new(options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 48 def initialize(options = nil) options ||= {} # Disable compression by default. options[:compress] ||= false super(options) @data = {} @max_size = options[:size] || 32.megabytes @max_prune_time = options[:max_prune_time] || 2 @cache_size = 0 @monitor = Monitor.new @pruning = false end ``` Calls superclass method [`ActiveSupport::Cache::Store::new`](store#method-c-new) supports\_cache\_versioning?() Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 62 def self.supports_cache_versioning? true end ``` Advertise cache versioning support. cleanup(options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 75 def cleanup(options = nil) options = merged_options(options) instrument(:cleanup, size: @data.size) do keys = synchronize { @data.keys } keys.each do |key| entry = @data[key] delete_entry(key, **options) if entry && entry.expired? end end end ``` Preemptively iterates through all stored keys and removes the ones which have expired. clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 67 def clear(options = nil) synchronize do @data.clear @cache_size = 0 end end ``` Delete all data stored in a given cache store. 
decrement(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 117 def decrement(name, amount = 1, options = nil) modify_value(name, -amount, options) end ``` Decrement an integer value in the cache. delete\_matched(matcher, options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 122 def delete_matched(matcher, options = nil) options = merged_options(options) instrument(:delete_matched, matcher.inspect) do matcher = key_matcher(matcher, options) keys = synchronize { @data.keys } keys.each do |key| delete_entry(key, **options) if key.match(matcher) end end end ``` Deletes cache entries if the cache key matches a given pattern. increment(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 112 def increment(name, amount = 1, options = nil) modify_value(name, amount, options) end ``` Increment an integer value in the cache. prune(target\_size, max\_time = nil) Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 88 def prune(target_size, max_time = nil) return if pruning? @pruning = true begin start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC) cleanup instrument(:prune, target_size, from: @cache_size) do keys = synchronize { @data.keys } keys.each do |key| delete_entry(key, **options) return if @cache_size <= target_size || (max_time && Process.clock_gettime(Process::CLOCK_MONOTONIC) - start_time > max_time) end end ensure @pruning = false end end ``` To ensure entries fit within the specified memory prune the cache by removing the least recently accessed entries. pruning?() Show source ``` # File activesupport/lib/active_support/cache/memory_store.rb, line 107 def pruning? @pruning end ``` Returns true if the cache is currently being pruned.
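The size-bounded LRU behaviour documented above (once the cap is exceeded, prune down to roughly three quarters of the maximum by evicting least-recently-used entries) can be illustrated with a toy single-threaded store. This is a sketch of the idea only; it has none of `MemoryStore`'s thread safety, entry objects, per-entry overhead accounting, or instrumentation, and all names here are invented:

```ruby
class TinyMemoryStore
  def initialize(max_size:)
    @max_size = max_size
    @data = {}        # Ruby hashes keep insertion order; we re-insert on read,
    @cache_size = 0   # so keys are always ordered least-recently-used first
  end

  def write(key, value)              # values are plain strings in this toy
    delete(key)                      # overwriting must not double-count size
    @data[key] = value
    @cache_size += value.bytesize
    prune(@max_size * 3 / 4) if @cache_size > @max_size
  end

  def read(key)
    value = @data.delete(key)
    return nil unless value
    @data[key] = value               # move to most-recently-used position
    value
  end

  def delete(key)
    value = @data.delete(key)
    @cache_size -= value.bytesize if value
  end

  private

  def prune(target_size)
    @data.keys.each do |key|         # oldest (least recently used) first
      break if @cache_size <= target_size
      delete(key)
    end
  end
end

store = TinyMemoryStore.new(max_size: 10)
store.write("a", "12345")
store.write("b", "12345")
store.read("a")            # "a" is now more recently used than "b"
store.write("c", "123")    # total exceeds the cap, so "b" (and "a") are pruned
```

Re-inserting on every read is the cheap trick that makes an insertion-ordered hash behave as an LRU list.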
rails class ActiveSupport::Cache::MemCacheStore class ActiveSupport::Cache::MemCacheStore ========================================== Parent: [ActiveSupport::Cache::Store](store) A cache store implementation which stores data in Memcached: [memcached.org](https://memcached.org) This is currently the most popular cache store for production websites. Special features: * Clustering and load balancing. One can specify multiple memcached servers, and [`MemCacheStore`](memcachestore) will load balance between all available servers. If a server goes down, then [`MemCacheStore`](memcachestore) will ignore it until it comes back up. [`MemCacheStore`](memcachestore) implements the [`Strategy::LocalCache`](strategy/localcache) strategy which implements an in-memory cache inside of a block. ESCAPE\_KEY\_CHARS new(\*addresses) Show source ``` # File activesupport/lib/active_support/cache/mem_cache_store.rb, line 109 def initialize(*addresses) addresses = addresses.flatten options = addresses.extract_options! if options.key?(:cache_nils) options[:skip_nil] = !options.delete(:cache_nils) end super(options) unless [String, Dalli::Client, NilClass].include?(addresses.first.class) raise ArgumentError, "First argument must be an empty array, an array of hosts or a Dalli::Client instance." end if addresses.first.is_a?(Dalli::Client) @data = addresses.first else mem_cache_options = options.dup # The value "compress: false" prevents duplicate compression within Dalli. mem_cache_options[:compress] = false (UNIVERSAL_OPTIONS - %i(compress)).each { |name| mem_cache_options.delete(name) } @data = self.class.build_mem_cache(*(addresses + [mem_cache_options])) end end ``` Creates a new [`MemCacheStore`](memcachestore) object, with the given memcached server addresses. Each address is either a host name, or a host-with-port string in the form of “host\_name:port”. 
For example:

```
ActiveSupport::Cache::MemCacheStore.new("localhost", "server-downstairs.localnetwork:8229")
```

If no addresses are provided, but `ENV["MEMCACHE_SERVERS"]` is defined, it will be used instead. Otherwise, [`MemCacheStore`](memcachestore) will connect to localhost:11211 (the default memcached port).

Calls superclass method [`ActiveSupport::Cache::Store::new`](store#method-c-new)

supports\_cache\_versioning?() Show source

```
# File activesupport/lib/active_support/cache/mem_cache_store.rb, line 30
def self.supports_cache_versioning?
  true
end
```

Advertise cache versioning support.

clear(options = nil) Show source

```
# File activesupport/lib/active_support/cache/mem_cache_store.rb, line 159
def clear(options = nil)
  rescue_error_with(nil) { @data.with { |c| c.flush_all } }
end
```

Clear the entire cache on all memcached servers. This method should be used with care when shared cache is being used.

decrement(name, amount = 1, options = nil) Show source

```
# File activesupport/lib/active_support/cache/mem_cache_store.rb, line 148
def decrement(name, amount = 1, options = nil)
  options = merged_options(options)
  instrument(:decrement, name, amount: amount) do
    rescue_error_with nil do
      @data.with { |c| c.decr(normalize_key(name, options), amount, options[:expires_in]) }
    end
  end
end
```

Decrement a cached value. This method uses the memcached decr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero.

increment(name, amount = 1, options = nil) Show source

```
# File activesupport/lib/active_support/cache/mem_cache_store.rb, line 135
def increment(name, amount = 1, options = nil)
  options = merged_options(options)
  instrument(:increment, name, amount: amount) do
    rescue_error_with nil do
      @data.with { |c| c.incr(normalize_key(name, options), amount, options[:expires_in]) }
    end
  end
end
```

Increment a cached value.
This method uses the memcached incr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero. stats() Show source ``` # File activesupport/lib/active_support/cache/mem_cache_store.rb, line 164 def stats @data.with { |c| c.stats } end ``` Get the statistics from the memcached servers. rails class ActiveSupport::Cache::NullStore class ActiveSupport::Cache::NullStore ====================================== Parent: [ActiveSupport::Cache::Store](store) A cache store implementation which doesn't actually store anything. Useful in development and test environments where you don't want caching turned on but need to go through the caching interface. This cache does implement the local cache strategy, so values will actually be cached inside blocks that utilize this strategy. See [`ActiveSupport::Cache::Strategy::LocalCache`](strategy/localcache) for more details. supports\_cache\_versioning?() Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 16 def self.supports_cache_versioning? true end ``` Advertise cache versioning support. 
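NullStore's behavior can be sketched in plain Ruby (a toy illustration for clarity; `ToyNullStore` is a hypothetical class, not the Rails implementation): every write is discarded, every read misses, and `fetch` re-runs its block on each call.

```ruby
# Toy sketch of a null store: accepts the cache API but never retains data.
class ToyNullStore
  def write(name, value, options = nil)
    true # pretend the write succeeded, but store nothing
  end

  def read(name, options = nil)
    nil # every read is a cache miss
  end

  def fetch(name, options = nil)
    # With nothing ever stored, the block runs on every call.
    block_given? ? yield(name) : nil
  end
end

store = ToyNullStore.new
store.write("city", "Duckburgh")
store.read("city")                               # => nil
calls = 0
2.times { store.fetch("city") { calls += 1; "Duckburgh" } }
calls                                            # => 2 (block re-ran; nothing was cached)
```

This is what makes such a store useful in tests: code exercises the full caching interface while behaving as if caching were disabled.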
cleanup(options = nil) Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 23 def cleanup(options = nil) end ``` clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 20 def clear(options = nil) end ``` decrement(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 29 def decrement(name, amount = 1, options = nil) end ``` delete\_matched(matcher, options = nil) Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 32 def delete_matched(matcher, options = nil) end ``` increment(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/null_store.rb, line 26 def increment(name, amount = 1, options = nil) end ``` rails class ActiveSupport::Cache::Store class ActiveSupport::Cache::Store ================================== Parent: [Object](../../object) An abstract cache store class. There are multiple cache store implementations, each having its own additional features. See the classes under the [`ActiveSupport::Cache`](../cache) module, e.g. [`ActiveSupport::Cache::MemCacheStore`](memcachestore). [`MemCacheStore`](memcachestore) is currently the most popular cache store for large production websites. Some implementations may not support all methods beyond the basic cache methods of `fetch`, `write`, `read`, `exist?`, and `delete`. [`ActiveSupport::Cache::Store`](store) can store any serializable Ruby object. ``` cache = ActiveSupport::Cache::MemoryStore.new cache.read('city') # => nil cache.write('city', "Duckburgh") cache.read('city') # => "Duckburgh" ``` Keys are always translated into Strings and are case sensitive. When an object is specified as a key and has a `cache_key` method defined, this method will be called to define the key. Otherwise, the `to_param` method will be called. Hashes and Arrays can also be used as keys. 
The elements will be delimited by slashes, and the elements within a [`Hash`](../../hash) will be sorted by key so they are consistent. ``` cache.read('city') == cache.read(:city) # => true ``` Nil values can be cached. If your cache is on a shared infrastructure, you can define a namespace for your cache entries. If a namespace is defined, it will be prefixed onto every key. The namespace can be either a static value or a Proc. If it is a Proc, it will be invoked when each key is evaluated so that you can use application logic to invalidate keys. ``` cache.namespace = -> { @last_mod_time } # Set the namespace to a variable @last_mod_time = Time.now # Invalidate the entire cache by changing namespace ``` Cached data larger than 1kB are compressed by default. To turn off compression, pass `compress: false` to the initializer or to individual `fetch` or `write` method calls. The 1kB compression threshold is configurable with the `:compress_threshold` option, specified in bytes. options[R] silence[R] silence?[R] new(options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 203 def initialize(options = nil) @options = options ? normalize_options(options) : {} @options[:compress] = true unless @options.key?(:compress) @options[:compress_threshold] = DEFAULT_COMPRESS_LIMIT unless @options.key?(:compress_threshold) @coder = @options.delete(:coder) { default_coder } || NullCoder @coder_supports_compression = @coder.respond_to?(:dump_compressed) end ``` Creates a new cache. The options will be passed to any write method calls except for `:namespace` which can be used to set the global namespace for the cache. cleanup(options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 574 def cleanup(options = nil) raise NotImplementedError.new("#{self.class.name} does not support cleanup") end ``` Cleans up the cache by removing expired entries. Options are passed to the underlying cache implementation.
Some implementations may not support this method. clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 584 def clear(options = nil) raise NotImplementedError.new("#{self.class.name} does not support clear") end ``` Clears the entire cache. Be careful with this method since it could affect other processes if shared cache is being used. The options hash is passed to the underlying cache implementation. Some implementations may not support this method. decrement(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 565 def decrement(name, amount = 1, options = nil) raise NotImplementedError.new("#{self.class.name} does not support decrement") end ``` Decrements an integer value in the cache. Options are passed to the underlying cache implementation. Some implementations may not support this method. delete(name, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 506 def delete(name, options = nil) options = merged_options(options) instrument(:delete, name) do delete_entry(normalize_key(name, options), **options) end end ``` Deletes an entry in the cache. Returns `true` if an entry is deleted. Options are passed to the underlying cache implementation. delete\_matched(matcher, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 547 def delete_matched(matcher, options = nil) raise NotImplementedError.new("#{self.class.name} does not support delete_matched") end ``` Deletes all entries with keys matching the pattern. Options are passed to the underlying cache implementation. Some implementations may not support this method. delete\_multi(names, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 517 def delete_multi(names, options = nil) options = merged_options(options) names.map! 
{ |key| normalize_key(key, options) } instrument :delete_multi, names do delete_multi_entries(names, **options) end end ``` Deletes multiple entries in the cache. Options are passed to the underlying cache implementation. exist?(name, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 529 def exist?(name, options = nil) options = merged_options(options) instrument(:exist?, name) do |payload| entry = read_entry(normalize_key(name, options), **options, event: payload) (entry && !entry.expired? && !entry.mismatched?(normalize_version(name, options))) || false end end ``` Returns `true` if the cache contains an entry for the given key. Options are passed to the underlying cache implementation. fetch(name, options = nil, &block) Show source ``` # File activesupport/lib/active_support/cache.rb, line 349 def fetch(name, options = nil, &block) if block_given? options = merged_options(options) key = normalize_key(name, options) entry = nil instrument(:read, name, options) do |payload| cached_entry = read_entry(key, **options, event: payload) unless options[:force] entry = handle_expired_entry(cached_entry, key, options) entry = nil if entry && entry.mismatched?(normalize_version(name, options)) payload[:super_operation] = :fetch if payload payload[:hit] = !!entry if payload end if entry get_entry_value(entry, name, options) else save_block_result_to_cache(name, options, &block) end elsif options && options[:force] raise ArgumentError, "Missing block: Calling `Cache#fetch` with `force: true` requires a block." else read(name, options) end end ``` Fetches data from the cache, using the given key. If there is data in the cache with the given key, then that data is returned. If there is no such data in the cache (a cache miss), then `nil` will be returned. However, if a block has been passed, that block will be passed the key and executed in the event of a cache miss. 
The return value of the block will be written to the cache under the given cache key, and that return value will be returned. ``` cache.write('today', 'Monday') cache.fetch('today') # => "Monday" cache.fetch('city') # => nil cache.fetch('city') do 'Duckburgh' end cache.fetch('city') # => "Duckburgh" ``` You may also specify additional options via the `options` argument. Setting `force: true` forces a cache “miss,” meaning we treat the cache value as missing even if it's present. Passing a block is required when `force` is true so this always results in a cache write. ``` cache.write('today', 'Monday') cache.fetch('today', force: true) { 'Tuesday' } # => 'Tuesday' cache.fetch('today', force: true) # => ArgumentError ``` The `:force` option is useful when you're calling some other method to ask whether you should force a cache write. Otherwise, it's clearer to just call `Cache#write`. Setting `skip_nil: true` will not cache nil result: ``` cache.fetch('foo') { nil } cache.fetch('bar', skip_nil: true) { nil } cache.exist?('foo') # => true cache.exist?('bar') # => false ``` Setting `compress: false` disables compression of the cache entry. Setting `:expires_in` will set an expiration time on the cache. All caches support auto-expiring content after a specified number of seconds. This value can be specified as an option to the constructor (in which case all entries will be affected), or it can be supplied to the `fetch` or `write` method to affect just one entry. `:expire_in` and `:expired_in` are aliases for `:expires_in`. ``` cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes) cache.write(key, value, expires_in: 1.minute) # Set a lower value for one entry ``` Setting `:expires_at` will set an absolute expiration time on the cache. All caches support auto-expiring content after a specified number of seconds. This value can only be supplied to the `fetch` or `write` method to affect just one entry. 
``` cache = ActiveSupport::Cache::MemoryStore.new cache.write(key, value, expires_at: Time.now.at_end_of_hour) ``` Setting `:version` verifies that the cache stored under `name` is of the same version; `nil` is returned on a version mismatch even if the entry is present. This feature is used to support recyclable cache keys. Setting `:race_condition_ttl` is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache entry expires and, due to heavy load, several different processes try to regenerate the data, they will all then try to write to the cache. To avoid that, the first process to find an expired cache entry bumps the entry's expiration time by the value set in `:race_condition_ttl`. Yes, this process is extending the time for a stale value by another few seconds. Because of the extended life of the previous cache entry, other processes will continue to use slightly stale data for just a bit longer. In the meantime, that first process will go ahead and write the new value into the cache. After that, all processes will start getting the new value. The key is to keep `:race_condition_ttl` small. If the process regenerating the entry errors out, the entry will be regenerated after the specified number of seconds. Also note that the life of a stale cache entry is extended only if it expired recently; otherwise a new value is generated and `:race_condition_ttl` does not play any role. ``` # Set all values to expire after one minute.
cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute) cache.write('foo', 'original value') val_1 = nil val_2 = nil sleep 60 Thread.new do val_1 = cache.fetch('foo', race_condition_ttl: 10.seconds) do sleep 1 'new value 1' end end Thread.new do val_2 = cache.fetch('foo', race_condition_ttl: 10.seconds) do 'new value 2' end end cache.fetch('foo') # => "original value" sleep 10 # First thread extended the life of cache by another 10 seconds cache.fetch('foo') # => "new value 1" val_1 # => "new value 1" val_2 # => "original value" ``` Other options will be handled by the specific cache store implementation. Internally, [`fetch`](store#method-i-fetch) calls read\_entry, and calls write\_entry on a cache miss. `options` will be passed to the [`read`](store#method-i-read) and [`write`](store#method-i-write) calls. For example, MemCacheStore's [`write`](store#method-i-write) method supports the `:raw` option, which tells the memcached server to store all values as strings. We can use this option with [`fetch`](store#method-i-fetch) too: ``` cache = ActiveSupport::Cache::MemCacheStore.new cache.fetch("foo", force: true, raw: true) do :bar end cache.fetch('foo') # => "bar" ``` fetch\_multi(\*names) { |name| ... } Show source ``` # File activesupport/lib/active_support/cache.rb, line 469 def fetch_multi(*names) raise ArgumentError, "Missing block: `Cache#fetch_multi` requires a block." unless block_given? options = names.extract_options! options = merged_options(options) instrument :read_multi, names, options do |payload| reads = read_multi_entries(names, **options) writes = {} ordered = names.index_with do |name| reads.fetch(name) { writes[name] = yield(name) } end payload[:hits] = reads.keys payload[:super_operation] = :fetch_multi write_multi(writes, options) ordered end end ``` Fetches data from the cache, using the given keys. If there is data in the cache with the given keys, then that data is returned. 
Otherwise, the supplied block is called for each key for which there was no data, and the result will be written to the cache and returned. Therefore, you need to pass a block that returns the data to be written to the cache. If you do not want to write the cache when the cache is not found, use [`read_multi`](store#method-i-read_multi). Returns a hash with the data for each of the names. For example: ``` cache.write("bim", "bam") cache.fetch_multi("bim", "unknown_key") do |key| "Fallback value for key: #{key}" end # => { "bim" => "bam", # "unknown_key" => "Fallback value for key: unknown_key" } ``` Options are passed to the underlying cache implementation. For example: ``` cache.fetch_multi("fizz", expires_in: 5.seconds) do |key| "buzz" end # => {"fizz"=>"buzz"} cache.read("fizz") # => "buzz" sleep(6) cache.read("fizz") # => nil ``` increment(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 556 def increment(name, amount = 1, options = nil) raise NotImplementedError.new("#{self.class.name} does not support increment") end ``` Increments an integer value in the cache. Options are passed to the underlying cache implementation. Some implementations may not support this method. mute() { || ... } Show source ``` # File activesupport/lib/active_support/cache.rb, line 219 def mute previous_silence, @silence = defined?(@silence) && @silence, true yield ensure @silence = previous_silence end ``` Silences the logger within a block. read(name, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 384 def read(name, options = nil) options = merged_options(options) key = normalize_key(name, options) version = normalize_version(name, options) instrument(:read, name, options) do |payload| entry = read_entry(key, **options, event: payload) if entry if entry.expired? 
delete_entry(key, **options) payload[:hit] = false if payload nil elsif entry.mismatched?(version) payload[:hit] = false if payload nil else payload[:hit] = true if payload entry.value end else payload[:hit] = false if payload nil end end end ``` Reads data from the cache, using the given key. If there is data in the cache with the given key, then that data is returned. Otherwise, `nil` is returned. Note that if data was written with the `:expires_in` or `:version` options, both of these conditions are applied before the data is returned. Options are passed to the underlying cache implementation. read\_multi(\*names) Show source ``` # File activesupport/lib/active_support/cache.rb, line 417 def read_multi(*names) options = names.extract_options! options = merged_options(options) instrument :read_multi, names, options do |payload| read_multi_entries(names, **options, event: payload).tap do |results| payload[:hits] = results.keys end end end ``` Reads multiple values at once from the cache. Options can be passed in the last argument. Some cache implementations may optimize this method. Returns a hash mapping the names provided to the values found. silence!() Show source ``` # File activesupport/lib/active_support/cache.rb, line 213 def silence! @silence = true self end ``` Silences the logger. write(name, value, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 494 def write(name, value, options = nil) options = merged_options(options) instrument(:write, name, options) do entry = Entry.new(value, **options.merge(version: normalize_version(name, options))) write_entry(normalize_key(name, options), entry, **options) end end ``` Writes the value to the cache, with the key. Options are passed to the underlying cache implementation.
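The `:version` behavior described for `fetch` and `read` can be sketched with a minimal hash-backed store (a simplified illustration; `ToyVersionedStore` is hypothetical, not Rails' `Entry` machinery): the version is stored alongside the value, and a mismatched version reads as a miss.

```ruby
# Sketch of version-aware write/read, the basis of recyclable cache keys.
class ToyVersionedStore
  Entry = Struct.new(:value, :version)

  def initialize
    @data = {}
  end

  def write(name, value, version: nil)
    @data[name.to_s] = Entry.new(value, version)
    true
  end

  def read(name, version: nil)
    entry = @data[name.to_s]
    return nil unless entry
    # A stored entry whose version does not match behaves like a cache miss.
    return nil if entry.version != version
    entry.value
  end
end

cache = ToyVersionedStore.new
cache.write("post/1", "body-v2", version: 2)
cache.read("post/1", version: 2) # => "body-v2"
cache.read("post/1", version: 3) # => nil (version mismatch reads as a miss)
```

The point of the design: a stale entry does not need to be deleted eagerly; it is simply ignored once its version no longer matches, and a later write under the same key replaces it.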
write\_multi(hash, options = nil) Show source ``` # File activesupport/lib/active_support/cache.rb, line 429 def write_multi(hash, options = nil) options = merged_options(options) instrument :write_multi, hash, options do |payload| entries = hash.each_with_object({}) do |(name, value), memo| memo[normalize_key(name, options)] = Entry.new(value, **options.merge(version: normalize_version(name, options))) end write_multi_entries entries, **options end end ``` [`Cache`](../cache) Storage API to write multiple values at once. key\_matcher(pattern, options) Show source ``` # File activesupport/lib/active_support/cache.rb, line 597 def key_matcher(pattern, options) # :doc: prefix = options[:namespace].is_a?(Proc) ? options[:namespace].call : options[:namespace] if prefix source = pattern.source if source.start_with?("^") source = source[1, source.length] else source = ".*#{source[0, source.length]}" end Regexp.new("^#{Regexp.escape(prefix)}:#{source}", pattern.options) else pattern end end ``` Adds the namespace defined in the options to a pattern designed to match keys. Implementations that support [`delete_matched`](store#method-i-delete_matched) should call this method to translate a pattern that matches names into one that matches namespaced keys.
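The namespace-prefixing logic of `key_matcher` can be exercised standalone; the following is a direct transcription of the method body shown above into plain Ruby, to show how a name-matching pattern is rewritten into a namespaced-key-matching one.

```ruby
# Standalone transcription of the key_matcher logic shown above.
def key_matcher(pattern, options)
  prefix = options[:namespace].is_a?(Proc) ? options[:namespace].call : options[:namespace]
  if prefix
    source = pattern.source
    if source.start_with?("^")
      # Re-anchor after the "namespace:" prefix.
      source = source[1, source.length]
    else
      # Unanchored patterns may match anywhere after the prefix.
      source = ".*#{source[0, source.length]}"
    end
    Regexp.new("^#{Regexp.escape(prefix)}:#{source}", pattern.options)
  else
    pattern
  end
end

matcher = key_matcher(/^city/, namespace: "myapp")
matcher                         # => /^myapp:city/
matcher.match?("myapp:city/1")  # => true
matcher.match?("other:city/1")  # => false
```

Without a `:namespace` option, the pattern is returned unchanged.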
rails class ActiveSupport::Cache::RedisCacheStore class ActiveSupport::Cache::RedisCacheStore ============================================ Parent: [ActiveSupport::Cache::Store](store) Redis cache store. Deployment note: Take care to use a \*dedicated Redis cache\* rather than pointing this at your existing Redis server. It won't cope well with mixed usage patterns and it won't expire cache entries by default. Redis cache server setup guide: [redis.io/topics/lru-cache](https://redis.io/topics/lru-cache) * Supports vanilla Redis, hiredis, and Redis::Distributed. * Supports Memcached-like sharding across Redises with Redis::Distributed. * Fault tolerant. If the Redis server is unavailable, no exceptions are raised. [`Cache`](../cache) fetches are all misses and writes are dropped. * Local cache. Hot in-memory primary cache within block/middleware scope. * `read_multi` and `write_multi` support for Redis mget/mset. Use Redis::Distributed 4.0.1+ for distributed mget support. * `delete_matched` support for Redis KEYS globs. 
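In a Rails app, this store is typically enabled in an environment config file. A minimal sketch follows; the URL env var, namespace, and logging inside the error handler are placeholder assumptions, not prescribed values.

```ruby
# config/environments/production.rb (sketch; values are placeholders)
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_URL"],      # e.g. "redis://localhost:6379/0" -- assumed env var
  namespace: "myapp-cache",   # recommended when the Redis server is shared
  expires_in: 1.hour,         # optional; Redis eviction policy can handle expiry instead
  error_handler: ->(method:, returning:, exception:) {
    # Fault tolerance: on Redis errors, reads become misses and writes are dropped;
    # this hook lets you report the failure instead of raising.
    Rails.logger.warn("Redis cache #{method} failed: #{exception.message}")
  }
}
```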
DEFAULT\_ERROR\_HANDLER DEFAULT\_REDIS\_OPTIONS MAX\_KEY\_BYTESIZE Keys are truncated with the [`ActiveSupport`](../../activesupport) digest if they exceed 1kB max\_key\_bytesize[R] redis\_options[R] new(namespace: nil, compress: true, compress\_threshold: 1.kilobyte, coder: default\_coder, expires\_in: nil, race\_condition\_ttl: nil, error\_handler: DEFAULT\_ERROR\_HANDLER, \*\*redis\_options) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 143 def initialize(namespace: nil, compress: true, compress_threshold: 1.kilobyte, coder: default_coder, expires_in: nil, race_condition_ttl: nil, error_handler: DEFAULT_ERROR_HANDLER, **redis_options) @redis_options = redis_options @max_key_bytesize = MAX_KEY_BYTESIZE @error_handler = error_handler super namespace: namespace, compress: compress, compress_threshold: compress_threshold, expires_in: expires_in, race_condition_ttl: race_condition_ttl, coder: coder end ``` Creates a new Redis cache store. Handles four options: :redis block, :redis instance, single :url string, and multiple :url strings. ``` Option Class Result :redis Proc -> options[:redis].call :redis Object -> options[:redis] :url String -> Redis.new(url: …) :url Array -> Redis::Distributed.new([{ url: … }, { url: … }, …]) ``` No namespace is set by default. Provide one if the Redis cache server is shared with other apps: `namespace: 'myapp-cache'`. Compression is enabled by default with a 1kB threshold, so cached values larger than 1kB are automatically compressed. Disable by passing `compress: false` or change the threshold by passing `compress_threshold: 4.kilobytes`. No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See [redis.io/topics/lru-cache](https://redis.io/topics/lru-cache) for cache server setup. Race condition TTL is not set by default. 
This can be used to avoid “thundering herd” cache writes when hot cache entries are expired. See `ActiveSupport::Cache::Store#fetch` for more. Calls superclass method [`ActiveSupport::Cache::Store::new`](store#method-c-new) supports\_cache\_versioning?() Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 69 def self.supports_cache_versioning? true end ``` Advertise cache versioning support. cleanup(options = nil) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 275 def cleanup(options = nil) super end ``` [`Cache`](../cache) [`Store`](store) API implementation. Removes expired entries. Handled natively by Redis least-recently-/ least-frequently-used expiry, so manual cleanup is not supported. Calls superclass method [`ActiveSupport::Cache::Store#cleanup`](store#method-i-cleanup) clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 283 def clear(options = nil) failsafe :clear do if namespace = merged_options(options)[:namespace] delete_matched "*", namespace: namespace else redis.with { |c| c.flushdb } end end end ``` Clear the entire cache on all Redis servers. Safe to use on shared servers if the cache is namespaced. Failsafe: Raises errors. decrement(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 256 def decrement(name, amount = 1, options = nil) instrument :decrement, name, amount: amount do failsafe :decrement do options = merged_options(options) key = normalize_key(name, options) redis.with do |c| c.decrby(key, amount).tap do write_key_expiry(c, key, options) end end end end end ``` [`Cache`](../cache) [`Store`](store) API implementation. Decrement a cached value. This method uses the Redis decr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero. 
Failsafe: Raises errors. delete\_matched(matcher, options = nil) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 204 def delete_matched(matcher, options = nil) instrument :delete_matched, matcher do unless String === matcher raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}" end redis.with do |c| pattern = namespace_key(matcher, options) cursor = "0" # Fetch keys in batches using SCAN to avoid blocking the Redis server. nodes = c.respond_to?(:nodes) ? c.nodes : [c] nodes.each do |node| begin cursor, keys = node.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE) node.del(*keys) unless keys.empty? end until cursor == "0" end end end end ``` [`Cache`](../cache) [`Store`](store) API implementation. Supports Redis KEYS glob patterns: ``` h?llo matches hello, hallo and hxllo h*llo matches hllo and heeeello h[ae]llo matches hello and hallo, but not hillo h[^e]llo matches hallo, hbllo, ... but not hello h[a-b]llo matches hallo and hbllo ``` Use \ to escape special characters if you want to match them verbatim. See [redis.io/commands/KEYS](https://redis.io/commands/KEYS) for more. Failsafe: Raises errors. increment(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 233 def increment(name, amount = 1, options = nil) instrument :increment, name, amount: amount do failsafe :increment do options = merged_options(options) key = normalize_key(name, options) redis.with do |c| c.incrby(key, amount).tap do write_key_expiry(c, key, options) end end end end end ``` [`Cache`](../cache) [`Store`](store) API implementation. Increment a cached value. This method uses the Redis incr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero. Failsafe: Raises errors. 
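The glob semantics listed for `delete_matched` resemble shell filename globbing. As a rough approximation only (Ruby's `File.fnmatch`, not Redis itself, so edge cases may differ), the listed patterns behave like this:

```ruby
# Approximating Redis KEYS glob semantics with File.fnmatch (illustrative only).
File.fnmatch("h?llo", "hello")     # => true  (? matches exactly one character)
File.fnmatch("h*llo", "heeeello")  # => true  (* matches any run of characters)
File.fnmatch("h[ae]llo", "hallo")  # => true  (character class)
File.fnmatch("h[ae]llo", "hillo")  # => false (i is not in the class)
```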
inspect() Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 168 def inspect instance = @redis || @redis_options "#<#{self.class} options=#{options.inspect} redis=#{instance.inspect}>" end ``` read\_multi(\*names) Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 177 def read_multi(*names) if mget_capable? instrument(:read_multi, names, options) do |payload| read_multi_mget(*names).tap do |results| payload[:hits] = results.keys end end else super end end ``` [`Cache`](../cache) [`Store`](store) API implementation. Read multiple values at once. Returns a hash of requested keys -> fetched values. Calls superclass method [`ActiveSupport::Cache::Store#read_multi`](store#method-i-read_multi) redis() Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 155 def redis @redis ||= begin pool_options = self.class.send(:retrieve_pool_options, redis_options) if pool_options.any? self.class.send(:ensure_connection_pool_added!) ::ConnectionPool.new(pool_options) { self.class.build_redis(**redis_options) } else self.class.build_redis(**redis_options) end end end ``` stats() Show source ``` # File activesupport/lib/active_support/cache/redis_cache_store.rb, line 294 def stats redis.with { |c| c.info } end ``` Get info from redis servers. rails class ActiveSupport::Cache::FileStore class ActiveSupport::Cache::FileStore ====================================== Parent: [ActiveSupport::Cache::Store](store) A cache store implementation which stores everything on the filesystem. [`FileStore`](filestore) implements the [`Strategy::LocalCache`](strategy/localcache) strategy which implements an in-memory cache inside of a block. 
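A file-backed cache like this is usually configured per environment. A minimal sketch (the directory path is a placeholder):

```ruby
# config/environments/development.rb (sketch; path is a placeholder)
# Note: `clear` deletes everything under this directory (except .keep/.gitkeep),
# so point it at a dedicated cache directory.
config.cache_store = :file_store, Rails.root.join("tmp", "cache")
```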
DIR\_FORMATTER FILENAME\_MAX\_SIZE FILEPATH\_MAX\_SIZE GITKEEP\_FILES cache\_path[R] new(cache\_path, \*\*options) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 21 def initialize(cache_path, **options) super(options) @cache_path = cache_path.to_s end ``` Calls superclass method [`ActiveSupport::Cache::Store::new`](store#method-c-new) supports\_cache\_versioning?() Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 27 def self.supports_cache_versioning? true end ``` Advertise cache versioning support. cleanup(options = nil) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 41 def cleanup(options = nil) options = merged_options(options) search_dir(cache_path) do |fname| entry = read_entry(fname, **options) delete_entry(fname, **options) if entry && entry.expired? end end ``` Preemptively iterates through all stored keys and removes the ones which have expired. clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 34 def clear(options = nil) root_dirs = (Dir.children(cache_path) - GITKEEP_FILES) FileUtils.rm_r(root_dirs.collect { |f| File.join(cache_path, f) }) rescue Errno::ENOENT, Errno::ENOTEMPTY end ``` Deletes all items from the cache. In this case it deletes all the entries in the specified file store directory except for .keep or .gitkeep. Be careful which directory is specified in your config file when using `FileStore` because everything in that directory will be deleted. decrement(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 57 def decrement(name, amount = 1, options = nil) modify_value(name, -amount, options) end ``` Decrements an already existing integer value that is stored in the cache. If the key is not found nothing is done. 
delete\_matched(matcher, options = nil) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 61 def delete_matched(matcher, options = nil) options = merged_options(options) instrument(:delete_matched, matcher.inspect) do matcher = key_matcher(matcher, options) search_dir(cache_path) do |path| key = file_path_key(path) delete_entry(path, **options) if key.match(matcher) end end end ``` increment(name, amount = 1, options = nil) Show source ``` # File activesupport/lib/active_support/cache/file_store.rb, line 51 def increment(name, amount = 1, options = nil) modify_value(name, amount, options) end ``` Increments an already existing integer value that is stored in the cache. If the key is not found nothing is done. rails module ActiveSupport::Cache::Strategy::LocalCache module ActiveSupport::Cache::Strategy::LocalCache ================================================== Caches that implement [`LocalCache`](localcache) will be backed by an in-memory cache for the duration of a block. Repeated calls to the cache for the same key will hit the in-memory cache for faster access. middleware() Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 69 def middleware @middleware ||= Middleware.new( "ActiveSupport::Cache::Strategy::LocalCache", local_cache_key) end ``` Middleware class can be inserted as a Rack handler to be local cache for the duration of request. with\_local\_cache(&block) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 63 def with_local_cache(&block) use_temporary_local_cache(LocalStore.new, &block) end ``` Use a local cache for the duration of block. rails class ActiveSupport::Cache::Strategy::LocalCache::LocalStore class ActiveSupport::Cache::Strategy::LocalCache::LocalStore ============================================================= Parent: [Object](../../../../object) Simple memory backed cache. 
This cache is not thread safe and is intended only for serving as a temporary memory cache for a single thread. new() Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 32 def initialize @data = {} end ``` clear(options = nil) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 36 def clear(options = nil) @data.clear end ``` delete\_entry(key) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 53 def delete_entry(key) !!@data.delete(key) end ``` read\_entry(key) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 40 def read_entry(key) @data[key] end ``` read\_multi\_entries(keys) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 44 def read_multi_entries(keys) @data.slice(*keys) end ``` write\_entry(key, entry) Show source ``` # File activesupport/lib/active_support/cache/strategy/local_cache.rb, line 48 def write_entry(key, entry) @data[key] = entry true end ``` rails module ActiveSupport::Callbacks::ClassMethods module ActiveSupport::Callbacks::ClassMethods ============================================== define\_callbacks(\*names) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 917 def define_callbacks(*names) options = names.extract_options! names.each do |name| name = name.to_sym ([self] + self.descendants).each do |target| target.set_callbacks name, CallbackChain.new(name, options) end module_eval <<-RUBY, __FILE__, __LINE__ + 1 def _run_#{name}_callbacks(&block) run_callbacks #{name.inspect}, &block end def self._#{name}_callbacks get_callbacks(#{name.inspect}) end def self._#{name}_callbacks=(value) set_callbacks(#{name.inspect}, value) end def _#{name}_callbacks __callbacks[#{name.inspect}] end RUBY end end ``` Define sets of events in the object life cycle that support callbacks.
``` define_callbacks :validate define_callbacks :initialize, :save, :destroy ``` ##### Options * `:terminator` - Determines when a before filter will halt the callback chain, preventing following before and around callbacks from being called and the event from being triggered. This should be a lambda to be executed. The current object and the result lambda of the callback will be provided to the terminator lambda. ``` define_callbacks :validate, terminator: ->(target, result_lambda) { result_lambda.call == false } ``` In this example, if any before validate callbacks returns `false`, any successive before and around callback is not executed. The default terminator halts the chain when a callback throws `:abort`. * `:skip_after_callbacks_if_terminated` - Determines if after callbacks should be terminated by the `:terminator` option. By default after callbacks are executed no matter if callback chain was terminated or not. This option has no effect if `:terminator` option is set to `nil`. * `:scope` - Indicates which methods should be executed when an object is used as a callback. ``` class Audit def before(caller) puts 'Audit: before is called' end def before_save(caller) puts 'Audit: before_save is called' end end class Account include ActiveSupport::Callbacks define_callbacks :save set_callback :save, :before, Audit.new def save run_callbacks :save do puts 'save in main' end end end ``` In the above case whenever you save an account the method `Audit#before` will be called. On the other hand ``` define_callbacks :save, scope: [:kind, :name] ``` would trigger `Audit#before_save` instead. That's constructed by calling `#{kind}_#{name}` on the given instance. In this case “kind” is “before” and “name” is “save”. In this context `:kind` and `:name` have special meanings: `:kind` refers to the kind of callback (before/after/around) and `:name` refers to the method on which callbacks are being defined. 
A declaration like ``` define_callbacks :save, scope: [:name] ``` would call `Audit#save`. ##### Notes `names` passed to `define_callbacks` must not end with `!`, `?` or `=`. Calling `define_callbacks` multiple times with the same `names` will overwrite previous callbacks registered with `set_callback`. reset\_callbacks(name) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 827 def reset_callbacks(name) callbacks = get_callbacks name self.descendants.each do |target| chain = target.get_callbacks(name).dup callbacks.each { |c| chain.delete(c) } target.set_callbacks name, chain end set_callbacks(name, callbacks.dup.clear) end ``` Remove all set callbacks for the given event. set\_callback(name, \*filter\_list, &block) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 756 def set_callback(name, *filter_list, &block) type, filters, options = normalize_callback_params(filter_list, block) self_chain = get_callbacks name mapped = filters.map do |filter| Callback.build(self_chain, filter, type, options) end __update_callbacks(name) do |target, chain| options[:prepend] ? chain.prepend(*mapped) : chain.append(*mapped) target.set_callbacks name, chain end end ``` Install a callback for the given event. ``` set_callback :save, :before, :before_method set_callback :save, :after, :after_method, if: :condition set_callback :save, :around, ->(r, block) { stuff; result = block.call; stuff } ``` The second argument indicates whether the callback is to be run `:before`, `:after`, or `:around` the event. If omitted, `:before` is assumed. This means the first example above can also be written as: ``` set_callback :save, :before_method ``` The callback can be specified as a symbol naming an instance method; as a proc, lambda, or block; or as an object that responds to a certain method determined by the `:scope` argument to `define_callbacks`. 
If a proc, lambda, or block is given, its body is evaluated in the context of the current object. It can also optionally accept the current object as an argument. Before and around callbacks are called in the order that they are set; after callbacks are called in the reverse order. Around callbacks can access the return value from the event, if it wasn't halted, from the `yield` call. ##### Options * `:if` - A symbol or an array of symbols, each naming an instance method or a proc; the callback will be called only when they all return a true value. If a proc is given, its body is evaluated in the context of the current object. It can also optionally accept the current object as an argument. * `:unless` - A symbol or an array of symbols, each naming an instance method or a proc; the callback will be called only when they all return a false value. If a proc is given, its body is evaluated in the context of the current object. It can also optionally accept the current object as an argument. * `:prepend` - If `true`, the callback will be prepended to the existing chain rather than appended. skip\_callback(name, \*filter\_list, &block) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 802 def skip_callback(name, *filter_list, &block) type, filters, options = normalize_callback_params(filter_list, block) options[:raise] = true unless options.key?(:raise) __update_callbacks(name) do |target, chain| filters.each do |filter| callback = chain.find { |c| c.matches?(type, filter) } if !callback && options[:raise] raise ArgumentError, "#{type.to_s.capitalize} #{name} callback #{filter.inspect} has not been defined" end if callback && (options.key?(:if) || options.key?(:unless)) new_callback = callback.merge_conditional_options(chain, if_option: options[:if], unless_option: options[:unless]) chain.insert(chain.index(callback), new_callback) end chain.delete(callback) end target.set_callbacks name, chain end end ``` Skip a previously set callback. 
Like `set_callback`, `:if` or `:unless` options may be passed in order to control when the callback is skipped. ``` class Writer < PersonRecord attr_accessor :age skip_callback :save, :before, :saving_message, if: -> { age > 18 } end ``` When if option returns true, callback is skipped. ``` writer = Writer.new writer.age = 20 writer.save ``` Output: ``` - save saved ``` When if option returns false, callback is NOT skipped. ``` young_writer = Writer.new young_writer.age = 17 young_writer.save ``` Output: ``` saving... - save saved ``` An `ArgumentError` will be raised if the callback has not already been set (unless the `:raise` option is set to `false`).
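The chain behavior documented in this section can be summarized in a minimal plain-Ruby sketch. This is a hypothetical illustration, not ActiveSupport's implementation (`TinyCallbacks`, `Account`, and every name below are invented): before callbacks run in registration order, after callbacks run in reverse, and throwing `:abort` halts the chain (the default terminator).

```ruby
# Hypothetical sketch of the callback-chain behavior documented above.
# Not the ActiveSupport implementation; :around callbacks, conditions,
# and per-class inheritance are omitted.
class TinyCallbacks
  def initialize
    @before = []
    @after = []
  end

  # Register a callback method name, as set_callback does for an event.
  def set_callback(kind, method_name)
    (kind == :before ? @before : @after) << method_name
  end

  # Run the chain around a block, as run_callbacks does.
  def run(target)
    catch(:abort) do
      @before.each { |m| target.send(m) }        # registration order
      result = yield
      @after.reverse_each { |m| target.send(m) } # reverse order
      return result
    end
    nil # a callback threw :abort, so the chain was halted
  end
end

class Account
  CHAIN = TinyCallbacks.new
  CHAIN.set_callback(:before, :check)
  CHAIN.set_callback(:after, :log)

  attr_reader :events

  def initialize(ok)
    @ok = ok
    @events = []
  end

  def check
    @events << :check
    throw :abort unless @ok
  end

  def log
    @events << :log
  end

  def save
    CHAIN.run(self) { @events << :saved; true }
  end
end

Account.new(true).save  # => true, events: [:check, :saved, :log]
Account.new(false).save # => nil, events: [:check] (chain halted)
```

The halted call returns `nil` and never reaches the body or the after callbacks, mirroring how the default terminator stops the real chain.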
rails class ActiveSupport::Callbacks::CallTemplate::MethodCall class ActiveSupport::Callbacks::CallTemplate::MethodCall ========================================================= Parent: [Object](../../../object) new(method) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 377 def initialize(method) @method_name = method end ``` expand(target, value, block) Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 394 def expand(target, value, block) [target, block, @method_name] end ``` Return the parts needed to make this call, with the given input values. Returns an array of the form: ``` [target, block, method, *arguments] ``` This array can be used as such: ``` target.send(method, *arguments, &block) ``` The actual invocation is left up to the caller to minimize call stack pollution. inverted\_lambda() Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 404 def inverted_lambda lambda do |target, value, &block| !target.send(@method_name, &block) end end ``` make\_lambda() Show source ``` # File activesupport/lib/active_support/callbacks.rb, line 398 def make_lambda lambda do |target, value, &block| target.send(@method_name, &block) end end ``` rails class ActiveSupport::SafeBuffer::SafeConcatError class ActiveSupport::SafeBuffer::SafeConcatError ================================================= Parent: StandardError Raised when `ActiveSupport::SafeBuffer#safe_concat` is called on unsafe buffers. new() Show source ``` # File activesupport/lib/active_support/core_ext/string/output_safety.rb, line 148 def initialize super "Could not concatenate to the buffer because it is not html safe." end ``` Calls superclass method rails module ActiveSupport::LogSubscriber::TestHelper module ActiveSupport::LogSubscriber::TestHelper ================================================ Provides some helpers to deal with testing log subscribers by setting up notifications. 
Take for instance Active Record subscriber tests: ``` class SyncLogSubscriberTest < ActiveSupport::TestCase include ActiveSupport::LogSubscriber::TestHelper setup do ActiveRecord::LogSubscriber.attach_to(:active_record) end def test_basic_query_logging Developer.all.to_a wait assert_equal 1, @logger.logged(:debug).size assert_match(/Developer Load/, @logger.logged(:debug).last) assert_match(/SELECT \* FROM "developers"/, @logger.logged(:debug).last) end end ``` All you need to do is ensure that your log subscriber is added to Rails::Subscriber, as in the second line of the code above. The test helpers are responsible for setting up the queue, subscriptions, and turning colors in logs off. The messages are available in the @logger instance, which is a logger with limited powers (it actually does not send anything to your output), and you can collect them by calling @logger.logged(level), where level is the level used in logging, like info, debug, warn and so on. set\_logger(logger) Show source ``` # File activesupport/lib/active_support/log_subscriber/test_helper.rb, line 101 def set_logger(logger) ActiveSupport::LogSubscriber.logger = logger end ``` Overwrite if you use another logger in your log subscriber. ``` def logger ActiveRecord::Base.logger = @logger end ``` wait() Show source ``` # File activesupport/lib/active_support/log_subscriber/test_helper.rb, line 92 def wait @notifier.wait end ``` Waits for notifications to be published. rails module ActiveSupport::Rescuable::ClassMethods module ActiveSupport::Rescuable::ClassMethods ============================================== rescue\_from(\*klasses, with: nil, &block) Show source ``` # File activesupport/lib/active_support/rescuable.rb, line 51 def rescue_from(*klasses, with: nil, &block) unless with if block_given? with = block else raise ArgumentError, "Need a handler. Pass the with: keyword argument or provide a block." 
end end klasses.each do |klass| key = if klass.is_a?(Module) && klass.respond_to?(:===) klass.name elsif klass.is_a?(String) klass else raise ArgumentError, "#{klass.inspect} must be an Exception class or a String referencing an Exception class" end # Put the new handler at the end because the list is read in reverse. self.rescue_handlers += [[key, with]] end end ``` Registers exception classes with a handler to be called by `rescue_with_handler`. `rescue_from` receives a series of exception classes or class names, and an exception handler specified by a trailing `:with` option containing the name of a method or a Proc object. Alternatively, a block can be given as the handler. Handlers that take one argument will be called with the exception, so that the exception can be inspected when dealing with it. Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which `exception.is_a?(klass)` holds true is the one invoked, if any. ``` class ApplicationController < ActionController::Base rescue_from User::NotAuthorized, with: :deny_access # self defined exception rescue_from ActiveRecord::RecordInvalid, with: :show_errors rescue_from 'MyAppError::Base' do |exception| render xml: exception, status: 500 end private def deny_access ... end def show_errors(exception) exception.record.new_record? ? ... end end ``` Exceptions raised inside exception handlers are not propagated up. 
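The right-to-left handler search described above can be illustrated with a small plain-Ruby sketch. This is a hypothetical standalone version, not ActiveSupport's implementation (`HANDLERS`, `handler_for`, and the handlers themselves are invented names):

```ruby
# Hypothetical sketch of the lookup rule for rescue_from handlers:
# handlers are appended in registration order but searched in reverse,
# so the most recently registered matching handler wins.
HANDLERS = [] # [[exception_class, handler_proc], ...]

def rescue_from(klass, &handler)
  HANDLERS << [klass, handler]
end

# Find the first handler, scanning newest-first, whose class matches
# via exception.is_a?(klass), as described in the documentation above.
def handler_for(exception)
  _klass, handler = HANDLERS.reverse_each.find { |klass, _| exception.is_a?(klass) }
  handler
end

rescue_from(StandardError) { |e| "generic: #{e.message}" }
rescue_from(ArgumentError) { |e| "argument: #{e.message}" }

e = ArgumentError.new("bad")
handler_for(e).call(e) # => "argument: bad" (last registration wins)
e = RuntimeError.new("oops")
handler_for(e).call(e) # => "generic: oops" (falls back to StandardError)
```

Because `ArgumentError` is itself a `StandardError`, both handlers match it; the reverse search is what makes the more recently registered, more specific handler win.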
rescue\_with\_handler(exception, object: self, visited\_exceptions: []) Show source ``` # File activesupport/lib/active_support/rescuable.rb, line 88 def rescue_with_handler(exception, object: self, visited_exceptions: []) visited_exceptions << exception if handler = handler_for_rescue(exception, object: object) handler.call exception exception elsif exception if visited_exceptions.include?(exception.cause) nil else rescue_with_handler(exception.cause, object: object, visited_exceptions: visited_exceptions) end end end ``` Matches an exception to a handler based on the exception class. If no handler matches the exception, check for a handler matching the (optional) exception.cause. If no handler matches the exception or its cause, this returns `nil`, so you can deal with unhandled exceptions. Be sure to re-raise unhandled exceptions if this is what you expect. ``` begin … rescue => exception rescue_with_handler(exception) || raise end ``` Returns the exception if it was handled and `nil` if it was not. rails class ActiveSupport::Inflector::Inflections class ActiveSupport::Inflector::Inflections ============================================ Parent: [Object](../../object) A singleton instance of this class is yielded by [`Inflector.inflections`](../inflector#method-i-inflections), which can then be used to specify additional inflection rules. If passed an optional locale, rules for other languages can be specified. The default locale is `:en`. Only rules for English are provided. ``` ActiveSupport::Inflector.inflections(:en) do |inflect| inflect.plural /^(ox)$/i, '\1\2en' inflect.singular /^(ox)en/i, '\1' inflect.irregular 'cactus', 'cacti' inflect.uncountable 'equipment' end ``` New rules are added at the top. So in the example above, the irregular rule for cactus will now be the first of the pluralization and singularization rules that is run. This guarantees that your rules run before any of the rules that may already have been loaded. 
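The "new rules are added at the top" behavior can be sketched in a few lines of plain Ruby. This is a hypothetical illustration of the precedence rule only, not ActiveSupport's inflector (the names `plurals`, `add_plural`, and `pluralize` are invented):

```ruby
# Hypothetical sketch of rule precedence: each new rule is prepended,
# so the most recently added rule is checked first and wins on a match.
plurals = []
add_plural = ->(rule, replacement) { plurals.unshift([rule, replacement]) }
pluralize = ->(word) do
  rule, replacement = plurals.find { |rx, _| word.match?(rx) }
  rule ? word.sub(rule, replacement) : word
end

add_plural.call(/s?$/, "s")        # generic default rule, added first
add_plural.call(/^(ox)$/i, '\1en') # added later, so checked first

pluralize.call("ox")  # => "oxen" (the later, more specific rule wins)
pluralize.call("cat") # => "cats" (falls through to the default rule)
```

Without the prepend, the generic `/s?$/` rule would match "ox" first and produce "oxs"; prepending is what guarantees user-supplied rules run before the built-in ones.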
acronyms[R] humans[R] plurals[R] singulars[R] uncountables[R] instance(locale = :en) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 63 def self.instance(locale = :en) @__instance__[locale] ||= new end ``` instance\_or\_fallback(locale) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 67 def self.instance_or_fallback(locale) I18n.fallbacks[locale].each do |k| return @__instance__[k] if @__instance__.key?(k) end instance(locale) end ``` new() Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 78 def initialize @plurals, @singulars, @uncountables, @humans, @acronyms = [], [], Uncountables.new, [], {} define_acronym_regex_patterns end ``` acronym(word) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 140 def acronym(word) @acronyms[word.downcase] = word define_acronym_regex_patterns end ``` Specifies a new acronym. An acronym must be specified as it will appear in a camelized string. An underscore string that contains the acronym will retain the acronym when passed to `camelize`, `humanize`, or `titleize`. A camelized string that contains the acronym will maintain the acronym when titleized or humanized, and will convert the acronym into a non-delimited single lowercase word when passed to `underscore`. ``` acronym 'HTML' titleize 'html' # => 'HTML' camelize 'html' # => 'HTML' underscore 'MyHTML' # => 'my_html' ``` The acronym, however, must occur as a delimited unit and not be part of another word for conversions to recognize it: ``` acronym 'HTTP' camelize 'my_http_delimited' # => 'MyHTTPDelimited' camelize 'https' # => 'Https', not 'HTTPs' underscore 'HTTPS' # => 'http_s', not 'https' acronym 'HTTPS' camelize 'https' # => 'HTTPS' underscore 'HTTPS' # => 'https' ``` Note: Acronyms that are passed to `pluralize` will no longer be recognized, since the acronym will not occur as a delimited unit in the pluralized result. 
To work around this, you must specify the pluralized form as an acronym as well: ``` acronym 'API' camelize(pluralize('api')) # => 'Apis' acronym 'APIs' camelize(pluralize('api')) # => 'APIs' ``` `acronym` may be used to specify any word that contains an acronym or otherwise needs to maintain a non-standard capitalization. The only restriction is that the word must begin with a capital letter. ``` acronym 'RESTful' underscore 'RESTful' # => 'restful' underscore 'RESTfulController' # => 'restful_controller' titleize 'RESTfulController' # => 'RESTful Controller' camelize 'restful' # => 'RESTful' camelize 'restful_controller' # => 'RESTfulController' acronym 'McDonald' underscore 'McDonald' # => 'mcdonald' camelize 'mcdonald' # => 'McDonald' ``` clear(scope = :all) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 229 def clear(scope = :all) case scope when :all clear(:acronyms) clear(:plurals) clear(:singulars) clear(:uncountables) clear(:humans) when :acronyms @acronyms = {} define_acronym_regex_patterns when :uncountables @uncountables = Uncountables.new when :plurals, :singulars, :humans instance_variable_set "@#{scope}", [] end end ``` Clears the loaded inflections within a given scope (default is `:all`). Give the scope as a symbol of the inflection type, the options are: `:plurals`, `:singulars`, `:uncountables`, `:humans`, `:acronyms`. ``` clear :all clear :plurals ``` human(rule, replacement) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 218 def human(rule, replacement) @humans.prepend([rule, replacement]) end ``` Specifies a humanized form of a string by a regular expression rule or by a string mapping. When using a regular expression based replacement, the normal humanize formatting is called after the replacement. When a string is used, the human form should be specified as desired (example: 'The name', not 'the\_name'). 
``` human /_cnt$/i, '\1_count' human 'legacy_col_person_name', 'Name' ``` irregular(singular, plural) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 172 def irregular(singular, plural) @uncountables.delete(singular) @uncountables.delete(plural) s0 = singular[0] srest = singular[1..-1] p0 = plural[0] prest = plural[1..-1] if s0.upcase == p0.upcase plural(/(#{s0})#{srest}$/i, '\1' + prest) plural(/(#{p0})#{prest}$/i, '\1' + prest) singular(/(#{s0})#{srest}$/i, '\1' + srest) singular(/(#{p0})#{prest}$/i, '\1' + srest) else plural(/#{s0.upcase}(?i)#{srest}$/, p0.upcase + prest) plural(/#{s0.downcase}(?i)#{srest}$/, p0.downcase + prest) plural(/#{p0.upcase}(?i)#{prest}$/, p0.upcase + prest) plural(/#{p0.downcase}(?i)#{prest}$/, p0.downcase + prest) singular(/#{s0.upcase}(?i)#{srest}$/, s0.upcase + srest) singular(/#{s0.downcase}(?i)#{srest}$/, s0.downcase + srest) singular(/#{p0.upcase}(?i)#{prest}$/, s0.upcase + srest) singular(/#{p0.downcase}(?i)#{prest}$/, s0.downcase + srest) end end ``` Specifies a new irregular that applies to both pluralization and singularization at the same time. This can only be used for strings, not regular expressions. You simply pass the irregular in singular and plural form. ``` irregular 'cactus', 'cacti' irregular 'person', 'people' ``` plural(rule, replacement) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 149 def plural(rule, replacement) @uncountables.delete(rule) if rule.is_a?(String) @uncountables.delete(replacement) @plurals.prepend([rule, replacement]) end ``` Specifies a new pluralization rule and its replacement. The rule can either be a string or a regular expression. The replacement should always be a string that may include references to the matched data from the rule. 
singular(rule, replacement) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 159 def singular(rule, replacement) @uncountables.delete(rule) if rule.is_a?(String) @uncountables.delete(replacement) @singulars.prepend([rule, replacement]) end ``` Specifies a new singularization rule and its replacement. The rule can either be a string or a regular expression. The replacement should always be a string that may include references to the matched data from the rule. uncountable(\*words) Show source ``` # File activesupport/lib/active_support/inflector/inflections.rb, line 206 def uncountable(*words) @uncountables.add(words) end ``` Specifies words that are uncountable and should not be inflected. ``` uncountable 'money' uncountable 'money', 'information' uncountable %w( money information rice ) ``` rails module ActiveSupport::Dependencies::RequireDependency module ActiveSupport::Dependencies::RequireDependency ====================================================== require\_dependency(filename) Show source ``` # File activesupport/lib/active_support/dependencies/require_dependency.rb, line 11 def require_dependency(filename) filename = filename.to_path if filename.respond_to?(:to_path) unless filename.is_a?(String) raise ArgumentError, "the file name must be either a String or implement #to_path -- you passed #{filename.inspect}" end if abspath = ActiveSupport::Dependencies.search_for_file(filename) require abspath else require filename end end ``` **Warning:** This method is obsolete. The semantics of the autoloader match Ruby's and you do not need to be defensive with load order anymore. Just refer to classes and modules normally. Engines that do not control the mode in which their parent application runs should call `require_dependency` where needed in case the runtime mode is `:classic`. 
rails module ActiveSupport::ActionableError::ClassMethods module ActiveSupport::ActionableError::ClassMethods ==================================================== action(name, &block) Show source ``` # File activesupport/lib/active_support/actionable_error.rb, line 43 def action(name, &block) _actions[name] = block end ``` Defines an action that can resolve the error. ``` class PendingMigrationError < MigrationError include ActiveSupport::ActionableError action "Run pending migrations" do ActiveRecord::Tasks::DatabaseTasks.migrate end end ``` rails class ActiveSupport::Configurable::Configuration class ActiveSupport::Configurable::Configuration ================================================= Parent: [ActiveSupport::InheritableOptions](../inheritableoptions) compile\_methods!(keys) Show source ``` # File activesupport/lib/active_support/configurable.rb, line 18 def self.compile_methods!(keys) keys.reject { |m| method_defined?(m) }.each do |key| class_eval <<-RUBY, __FILE__, __LINE__ + 1 def #{key}; _get(#{key.inspect}); end RUBY end end ``` Compiles reader methods so we don't have to go through method\_missing. compile\_methods!() Show source ``` # File activesupport/lib/active_support/configurable.rb, line 13 def compile_methods! self.class.compile_methods!(keys) end ``` rails module ActiveSupport::Configurable::ClassMethods module ActiveSupport::Configurable::ClassMethods ================================================= config() Show source ``` # File activesupport/lib/active_support/configurable.rb, line 28 def config @_config ||= if respond_to?(:superclass) && superclass.respond_to?(:config) superclass.config.inheritable_copy else # create a new "anonymous" class that will host the compiled reader methods Class.new(Configuration).new end end ``` configure() { |config| ... 
} Show source ``` # File activesupport/lib/active_support/configurable.rb, line 37 def configure yield config end ``` config\_accessor(\*names, instance\_reader: true, instance\_writer: true, instance\_accessor: true, default: nil) { ... } Show source ``` # File activesupport/lib/active_support/configurable.rb, line 109 def config_accessor(*names, instance_reader: true, instance_writer: true, instance_accessor: true, default: nil) # :doc: names.each do |name| raise NameError.new("invalid config attribute name") unless /\A[_A-Za-z]\w*\z/.match?(name) reader, reader_line = "def #{name}; config.#{name}; end", __LINE__ writer, writer_line = "def #{name}=(value); config.#{name} = value; end", __LINE__ singleton_class.class_eval reader, __FILE__, reader_line singleton_class.class_eval writer, __FILE__, writer_line if instance_accessor class_eval reader, __FILE__, reader_line if instance_reader class_eval writer, __FILE__, writer_line if instance_writer end send("#{name}=", block_given? ? yield : default) end end ``` Allows you to add shortcuts so that you don't have to refer to attributes through config. Also look at the example for config to contrast. Defines both class and instance config accessors. ``` class User include ActiveSupport::Configurable config_accessor :allowed_access end User.allowed_access # => nil User.allowed_access = false User.allowed_access # => false user = User.new user.allowed_access # => false user.allowed_access = true user.allowed_access # => true User.allowed_access # => false ``` The attribute name must be a valid method name in Ruby. ``` class User include ActiveSupport::Configurable config_accessor :"1_Badname" end # => NameError: invalid config attribute name ``` To omit the instance writer method, pass `instance_writer: false`. To omit the instance reader method, pass `instance_reader: false`. 
``` class User include ActiveSupport::Configurable config_accessor :allowed_access, instance_reader: false, instance_writer: false end User.allowed_access = false User.allowed_access # => false User.new.allowed_access = true # => NoMethodError User.new.allowed_access # => NoMethodError ``` Or pass `instance_accessor: false`, to omit both instance methods. ``` class User include ActiveSupport::Configurable config_accessor :allowed_access, instance_accessor: false end User.allowed_access = false User.allowed_access # => false User.new.allowed_access = true # => NoMethodError User.new.allowed_access # => NoMethodError ``` Also you can pass `default` or a block to set up the attribute with a default value. ``` class User include ActiveSupport::Configurable config_accessor :allowed_access, default: false config_accessor :hair_colors do [:brown, :black, :blonde, :red] end end User.allowed_access # => false User.hair_colors # => [:brown, :black, :blonde, :red] ```
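The class/instance split shown in these examples (an instance write does not change the class-level value) can be sketched in plain Ruby. This is a hypothetical `TinyConfigurable`, not ActiveSupport's implementation, which instead compiles readers onto an inheritable `Configuration` object:

```ruby
# Hypothetical sketch of what config_accessor generates: class-level
# accessors backed by a shared config store, plus instance-level
# accessors backed by a per-instance copy.
class TinyConfigurable
  def self.config
    @config ||= {}
  end

  # Each instance copies the class config on first access, so instance
  # writes do not leak back to the class.
  def config
    @config ||= self.class.config.dup
  end

  def self.config_accessor(name, default: nil)
    define_singleton_method(name)       { config[name] }
    define_singleton_method("#{name}=") { |v| config[name] = v }
    define_method(name)                 { config[name] }
    define_method("#{name}=")           { |v| config[name] = v }
    config[name] = default
  end
end

class User < TinyConfigurable
  config_accessor :allowed_access, default: false
end

User.allowed_access     # => false
user = User.new
user.allowed_access = true
user.allowed_access     # => true
User.allowed_access     # => false (unchanged by the instance write)
```

Unlike the real implementation, this sketch copies the class config eagerly on first instance access, so class-level changes made afterwards are not seen by existing instances.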
rails module ActiveSupport::Testing::Assertions module ActiveSupport::Testing::Assertions ========================================== assert\_changes(expression, message = nil, from: UNTRACKED, to: UNTRACKED, &block) Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 175 def assert_changes(expression, message = nil, from: UNTRACKED, to: UNTRACKED, &block) exp = expression.respond_to?(:call) ? expression : -> { eval(expression.to_s, block.binding) } before = exp.call retval = _assert_nothing_raised_or_warn("assert_changes", &block) unless from == UNTRACKED error = "Expected change from #{from.inspect}" error = "#{message}.\n#{error}" if message assert from === before, error end after = exp.call error = "#{expression.inspect} didn't change" error = "#{error}. It was already #{to}" if before == to error = "#{message}.\n#{error}" if message refute_equal before, after, error unless to == UNTRACKED error = "Expected change to #{to}\n" error = "#{message}.\n#{error}" if message assert to === after, error end retval end ``` Assertion that the result of evaluating an expression is changed before and after invoking the passed in block. ``` assert_changes 'Status.all_good?' do post :create, params: { status: { ok: false } } end ``` You can pass the block as a string to be evaluated in the context of the block. A lambda can be passed for the block as well. ``` assert_changes -> { Status.all_good? } do post :create, params: { status: { ok: false } } end ``` The assertion is useful to test side effects. The passed block can be anything that can be converted to string with to\_s. ``` assert_changes :@object do @object = 42 end ``` The keyword arguments :from and :to can be given to specify the expected initial value and the expected value after the block was executed. ``` assert_changes :@object, from: nil, to: :foo do @object = :foo end ``` An error message can be specified. ``` assert_changes -> { Status.all_good? 
}, 'Expected the status to be bad' do post :create, params: { status: { incident: true } } end ``` assert\_difference(expression, \*args, &block) Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 86 def assert_difference(expression, *args, &block) expressions = if expression.is_a?(Hash) message = args[0] expression else difference = args[0] || 1 message = args[1] Array(expression).index_with(difference) end exps = expressions.keys.map { |e| e.respond_to?(:call) ? e : lambda { eval(e, block.binding) } } before = exps.map(&:call) retval = _assert_nothing_raised_or_warn("assert_difference", &block) expressions.zip(exps, before) do |(code, diff), exp, before_value| error = "#{code.inspect} didn't change by #{diff}" error = "#{message}.\n#{error}" if message assert_equal(before_value + diff, exp.call, error) end retval end ``` Test numeric difference between the return value of an expression as a result of what is evaluated in the yielded block. ``` assert_difference 'Article.count' do post :create, params: { article: {...} } end ``` An arbitrary expression is passed in and evaluated. ``` assert_difference 'Article.last.comments(:reload).size' do post :create, params: { comment: {...} } end ``` An arbitrary positive or negative difference can be specified. The default is `1`. ``` assert_difference 'Article.count', -1 do post :delete, params: { id: ... } end ``` An array of expressions can also be passed in and evaluated. ``` assert_difference [ 'Article.count', 'Post.count' ], 2 do post :create, params: { article: {...} } end ``` A hash of expressions/numeric differences can also be passed in and evaluated. 
``` assert_difference ->{ Article.count } => 1, ->{ Notification.count } => 2 do post :create, params: { article: {...} } end ``` A lambda or a list of lambdas can be passed in and evaluated: ``` assert_difference ->{ Article.count }, 2 do post :create, params: { article: {...} } end assert_difference [->{ Article.count }, ->{ Post.count }], 2 do post :create, params: { article: {...} } end ``` An error message can be specified. ``` assert_difference 'Article.count', -1, 'An Article should be destroyed' do post :delete, params: { id: ... } end ``` assert\_no\_changes(expression, message = nil, from: UNTRACKED, &block) Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 222 def assert_no_changes(expression, message = nil, from: UNTRACKED, &block) exp = expression.respond_to?(:call) ? expression : -> { eval(expression.to_s, block.binding) } before = exp.call retval = _assert_nothing_raised_or_warn("assert_no_changes", &block) unless from == UNTRACKED error = "Expected initial value of #{from.inspect}" error = "#{message}.\n#{error}" if message assert from === before, error end after = exp.call error = "#{expression.inspect} changed" error = "#{message}.\n#{error}" if message if before.nil? assert_nil after, error else assert_equal before, after, error end retval end ``` Assertion that the result of evaluating an expression is not changed before and after invoking the passed in block. ``` assert_no_changes 'Status.all_good?' do post :create, params: { status: { ok: true } } end ``` Provide the optional keyword argument :from to specify the expected initial value. ``` assert_no_changes -> { Status.all_good? }, from: true do post :create, params: { status: { ok: true } } end ``` An error message can be specified. ``` assert_no_changes -> { Status.all_good? 
}, 'Expected the status to be good' do post :create, params: { status: { ok: false } } end ``` assert\_no\_difference(expression, message = nil, &block) Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 137 def assert_no_difference(expression, message = nil, &block) assert_difference expression, 0, message, &block end ``` Assertion that the numeric result of evaluating an expression is not changed before and after invoking the passed in block. ``` assert_no_difference 'Article.count' do post :create, params: { article: invalid_attributes } end ``` A lambda can be passed in and evaluated. ``` assert_no_difference -> { Article.count } do post :create, params: { article: invalid_attributes } end ``` An error message can be specified. ``` assert_no_difference 'Article.count', 'An Article should not be created' do post :create, params: { article: invalid_attributes } end ``` An array of expressions can also be passed in and evaluated. ``` assert_no_difference [ 'Article.count', -> { Post.count } ] do post :create, params: { article: invalid_attributes } end ``` assert\_not(object, message = nil) Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 21 def assert_not(object, message = nil) message ||= "Expected #{mu_pp(object)} to be nil or false" assert !object, message end ``` Asserts that an expression is not truthy. Passes if `object` is `nil` or `false`. “Truthy” means “considered true in a conditional” like `if foo`. ``` assert_not nil # => true assert_not false # => true assert_not 'foo' # => Expected "foo" to be nil or false ``` An error message can be specified. ``` assert_not foo, 'foo should be false' ``` assert\_nothing\_raised() { || ... } Show source ``` # File activesupport/lib/active_support/testing/assertions.rb, line 33 def assert_nothing_raised yield rescue => error raise Minitest::UnexpectedError.new(error) end ``` Assertion that the block should not raise an exception. 
Passes if evaluated code in the yielded block raises no exception. ``` assert_nothing_raised do perform_service(param: 'no_exception') end ``` rails module ActiveSupport::Testing::TimeHelpers module ActiveSupport::Testing::TimeHelpers =========================================== Contains helpers that help you test passage of time. after\_teardown() Show source ``` # File activesupport/lib/active_support/testing/time_helpers.rb, line 70 def after_teardown travel_back super end ``` Calls superclass method freeze\_time(&block) Show source ``` # File activesupport/lib/active_support/testing/time_helpers.rb, line 234 def freeze_time(&block) travel_to Time.now, &block end ``` Calls `travel_to` with `Time.now`. ``` Time.current # => Sun, 09 Jul 2017 15:34:49 EST -05:00 freeze_time sleep(1) Time.current # => Sun, 09 Jul 2017 15:34:49 EST -05:00 ``` This method also accepts a block, which will return the current time back to its original state at the end of the block: ``` Time.current # => Sun, 09 Jul 2017 15:34:49 EST -05:00 freeze_time do sleep(1) User.create.created_at # => Sun, 09 Jul 2017 15:34:49 EST -05:00 end Time.current # => Sun, 09 Jul 2017 15:34:50 EST -05:00 ``` travel(duration, &block) Show source ``` # File activesupport/lib/active_support/testing/time_helpers.rb, line 93 def travel(duration, &block) travel_to Time.now + duration, &block end ``` Changes current time to the time in the future or in the past by a given time difference by stubbing `Time.now`, `Date.today`, and `DateTime.now`. The stubs are automatically removed at the end of the test. 
``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel 1.day Time.current # => Sun, 10 Nov 2013 15:34:49 EST -05:00 Date.current # => Sun, 10 Nov 2013 DateTime.current # => Sun, 10 Nov 2013 15:34:49 -0500 ``` This method also accepts a block, which will return the current time back to its original state at the end of the block: ``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel 1.day do User.create.created_at # => Sun, 10 Nov 2013 15:34:49 EST -05:00 end Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 ``` travel\_back() { || ... } Show source ``` # File activesupport/lib/active_support/testing/time_helpers.rb, line 208 def travel_back stubbed_time = Time.current if block_given? && simple_stubs.stubbed? simple_stubs.unstub_all! yield if block_given? ensure travel_to stubbed_time if stubbed_time end ``` Returns the current time back to its original state, by removing the stubs added by `travel`, `travel_to`, and `freeze_time`. ``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel_to Time.zone.local(2004, 11, 24, 1, 4, 44) Time.current # => Wed, 24 Nov 2004 01:04:44 EST -05:00 travel_back Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 ``` This method also accepts a block, which brings the stubs back at the end of the block: ``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel_to Time.zone.local(2004, 11, 24, 1, 4, 44) Time.current # => Wed, 24 Nov 2004 01:04:44 EST -05:00 travel_back do Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 end Time.current # => Wed, 24 Nov 2004 01:04:44 EST -05:00 ``` Also aliased as: [unfreeze\_time](timehelpers#method-i-unfreeze_time) travel\_to(date\_or\_time) { || ... } Show source ``` # File activesupport/lib/active_support/testing/time_helpers.rb, line 128 def travel_to(date_or_time) if block_given? 
&& in_block travel_to_nested_block_call = <<~MSG Calling `travel_to` with a block, when we have previously already made a call to `travel_to`, can lead to confusing time stubbing. Instead of: travel_to 2.days.from_now do # 2 days from today travel_to 3.days.from_now do # 5 days from today end end preferred way to achieve above is: travel 2.days do # 2 days from today end travel 5.days do # 5 days from today end MSG raise travel_to_nested_block_call end if date_or_time.is_a?(Date) && !date_or_time.is_a?(DateTime) now = date_or_time.midnight.to_time elsif date_or_time.is_a?(String) now = Time.zone.parse(date_or_time) else now = date_or_time.to_time.change(usec: 0) end stubbed_time = Time.now if simple_stubs.stubbing(Time, :now) simple_stubs.stub_object(Time, :now) { at(now.to_i) } simple_stubs.stub_object(Date, :today) { jd(now.to_date.jd) } simple_stubs.stub_object(DateTime, :now) { jd(now.to_date.jd, now.hour, now.min, now.sec, Rational(now.utc_offset, 86400)) } if block_given? begin self.in_block = true yield ensure if stubbed_time travel_to stubbed_time else travel_back end self.in_block = false end end end ``` Changes current time to the given time by stubbing `Time.now`, `Date.today`, and `DateTime.now` to return the time or date passed into this method. The stubs are automatically removed at the end of the test. ``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel_to Time.zone.local(2004, 11, 24, 1, 4, 44) Time.current # => Wed, 24 Nov 2004 01:04:44 EST -05:00 Date.current # => Wed, 24 Nov 2004 DateTime.current # => Wed, 24 Nov 2004 01:04:44 -0500 ``` Dates are taken as their timestamp at the beginning of the day in the application time zone. `Time.current` returns said timestamp, and `Time.now` its equivalent in the system time zone. Similarly, `Date.current` returns a date equal to the argument, and `Date.today` the date according to `Time.now`, which may be different. 
(Note that you rarely want to deal with `Time.now` or `Date.today`; to honor the application time zone, always use `Time.current` and `Date.current`.) Note that the usec for the time passed will be set to 0 to prevent rounding errors with external services, like MySQL (which will round instead of floor, leading to off-by-one-second errors). This method also accepts a block, which will return the current time back to its original state at the end of the block: ``` Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 travel_to Time.zone.local(2004, 11, 24, 1, 4, 44) do Time.current # => Wed, 24 Nov 2004 01:04:44 EST -05:00 end Time.current # => Sat, 09 Nov 2013 15:34:49 EST -05:00 ``` unfreeze\_time() Alias for: [travel\_back](timehelpers#method-i-travel_back) rails module ActiveSupport::Testing::Deprecation module ActiveSupport::Testing::Deprecation =========================================== assert\_deprecated(match = nil, deprecator = nil, &block) Show source ``` # File activesupport/lib/active_support/testing/deprecation.rb, line 31 def assert_deprecated(match = nil, deprecator = nil, &block) result, warnings = collect_deprecations(deprecator, &block) assert !warnings.empty?, "Expected a deprecation warning within the block but received none" if match match = Regexp.new(Regexp.escape(match)) unless match.is_a?(Regexp) assert warnings.any? { |w| match.match?(w) }, "No deprecation warning matched #{match}: #{warnings.join(', ')}" end result end ``` Asserts that a matching deprecation warning was emitted by the given deprecator during the execution of the yielded block. ``` assert_deprecated(/foo/, CustomDeprecator) do CustomDeprecator.warn "foo should no longer be used" end ``` The `match` object may be a `Regexp`, or `String` appearing in the message. ``` assert_deprecated('foo', CustomDeprecator) do CustomDeprecator.warn "foo should no longer be used" end ``` If the `match` is omitted (or explicitly `nil`), any deprecation warning will match.
``` assert_deprecated(nil, CustomDeprecator) do CustomDeprecator.warn "foo should no longer be used" end ``` If no `deprecator` is given, defaults to [`ActiveSupport::Deprecation`](../deprecation). ``` assert_deprecated do ActiveSupport::Deprecation.warn "foo should no longer be used" end ``` assert\_not\_deprecated(deprecator = nil, &block) Show source ``` # File activesupport/lib/active_support/testing/deprecation.rb, line 56 def assert_not_deprecated(deprecator = nil, &block) result, deprecations = collect_deprecations(deprecator, &block) assert deprecations.empty?, "Expected no deprecation warning within the block but received #{deprecations.size}: \n #{deprecations * "\n "}" result end ``` Asserts that no deprecation warnings are emitted by the given deprecator during the execution of the yielded block. ``` assert_not_deprecated(CustomDeprecator) do CustomDeprecator.warn "message" # fails assertion end ``` If no `deprecator` is given, defaults to [`ActiveSupport::Deprecation`](../deprecation). ``` assert_not_deprecated do ActiveSupport::Deprecation.warn "message" # fails assertion end assert_not_deprecated do CustomDeprecator.warn "message" # passes assertion end ``` collect\_deprecations(deprecator = nil) { || ... } Show source ``` # File activesupport/lib/active_support/testing/deprecation.rb, line 75 def collect_deprecations(deprecator = nil) deprecator ||= ActiveSupport::Deprecation old_behavior = deprecator.behavior deprecations = [] deprecator.behavior = Proc.new do |message, callstack| deprecations << message end result = yield [result, deprecations] ensure deprecator.behavior = old_behavior end ``` Returns an array of all the deprecation warnings emitted by the given `deprecator` during the execution of the yielded block. ``` collect_deprecations(CustomDeprecator) do CustomDeprecator.warn "message" end # => ["message"] ``` If no `deprecator` is given, defaults to [`ActiveSupport::Deprecation`](../deprecation). 
``` collect_deprecations do CustomDeprecator.warn "custom message" ActiveSupport::Deprecation.warn "message" end # => ["message"] ``` rails module ActiveSupport::Testing::FileFixtures module ActiveSupport::Testing::FileFixtures ============================================ Adds simple access to sample files called file fixtures. [`File`](../../file) fixtures are normal files stored in `ActiveSupport::TestCase.file_fixture_path`. [`File`](../../file) fixtures are represented as `Pathname` objects. This makes it easy to extract specific information: ``` file_fixture("example.txt").read # get the file's content file_fixture("example.mp3").size # get the file size ``` file\_fixture(fixture\_name) Show source ``` # File activesupport/lib/active_support/testing/file_fixtures.rb, line 26 def file_fixture(fixture_name) path = Pathname.new(File.join(file_fixture_path, fixture_name)) if path.exist? path else msg = "the directory '%s' does not contain a file named '%s'" raise ArgumentError, msg % [file_fixture_path, fixture_name] end end ``` Returns a `Pathname` to the fixture file named `fixture_name`. Raises `ArgumentError` if `fixture_name` can't be found. rails module ActiveSupport::Testing::SetupAndTeardown module ActiveSupport::Testing::SetupAndTeardown ================================================ Included modules: [ActiveSupport::Callbacks](../callbacks) Adds support for `setup` and `teardown` callbacks. These callbacks serve as a replacement to overwriting the `#setup` and `#teardown` methods of your [`TestCase`](../testcase). ``` class ExampleTest < ActiveSupport::TestCase setup do # ... end teardown do # ... 
end end ``` prepended(klass) Show source ``` # File activesupport/lib/active_support/testing/setup_and_teardown.rb, line 21 def self.prepended(klass) klass.include ActiveSupport::Callbacks klass.define_callbacks :setup, :teardown klass.extend ClassMethods end ``` rails module ActiveSupport::Testing::ConstantLookup module ActiveSupport::Testing::ConstantLookup ============================================== Resolves a constant from a minitest spec name. Given the following spec-style test: ``` describe WidgetsController, :index do describe "authenticated user" do describe "returns widgets" do it "has a controller that exists" do assert_kind_of WidgetsController, @controller end end end end ``` The test will have the following name: ``` "WidgetsController::index::authenticated user::returns widgets" ``` The constant WidgetsController can be resolved from the name. The following code will resolve the constant: ``` controller = determine_constant_from_test_name(name) do |constant| Class === constant && constant < ::ActionController::Metal end ```
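The lookup strategy (peel `::`-separated segments off the end of the test name until a candidate resolves and satisfies the block) can be sketched in plain Ruby. This is a simplified illustration, not the Active Support source; `determine_constant_from_name` is a made-up stand-in:

```ruby
# Simplified sketch of the constant-lookup strategy: walk the
# "::"-separated test name from right to left until a segment chain
# resolves to a constant that satisfies the given criteria block.
def determine_constant_from_name(name)
  names = name.split("::")
  while names.any?
    candidate = names.join("::")
    begin
      constant = Object.const_get(candidate)
      return constant if yield(constant)
    rescue NameError
      # Segment is not a valid or defined constant
      # (e.g. "authenticated user"); keep trimming.
    end
    names.pop
  end
  nil
end

class WidgetsController; end

determine_constant_from_name("WidgetsController::index::authenticated user") do |c|
  Class === c
end
# => WidgetsController
```

The real helper additionally checks inherited constants and ancestors, but the trimming loop above is the core idea.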
rails module ActiveSupport::Testing::Declarative module ActiveSupport::Testing::Declarative =========================================== test(name, &block) Show source ``` # File activesupport/lib/active_support/testing/declarative.rb, line 13 def test(name, &block) test_name = "test_#{name.gsub(/\s+/, '_')}".to_sym defined = method_defined? test_name raise "#{test_name} is already defined in #{self}" if defined if block_given? define_method(test_name, &block) else define_method(test_name) do flunk "No implementation provided for #{name}" end end end ``` Helper to define a test method using a [`String`](../../string). Under the hood, it replaces spaces with underscores and defines the test method. ``` test "verify something" do ... end ``` rails module ActiveSupport::Testing::SetupAndTeardown::ClassMethods module ActiveSupport::Testing::SetupAndTeardown::ClassMethods ============================================================== setup(\*args, &block) Show source ``` # File activesupport/lib/active_support/testing/setup_and_teardown.rb, line 29 def setup(*args, &block) set_callback(:setup, :before, *args, &block) end ``` Add a callback, which runs before `TestCase#setup`. teardown(\*args, &block) Show source ``` # File activesupport/lib/active_support/testing/setup_and_teardown.rb, line 34 def teardown(*args, &block) set_callback(:teardown, :after, *args, &block) end ``` Add a callback, which runs after `TestCase#teardown`. rails module ActiveSupport::Testing::Isolation::Subprocess module ActiveSupport::Testing::Isolation::Subprocess ===================================================== ORIG\_ARGV run\_in\_isolation() { || ... } Show source ``` # File activesupport/lib/active_support/testing/isolation.rb, line 68 def run_in_isolation(&blk) require "tempfile" if ENV["ISOLATION_TEST"] yield test_result = defined?(Minitest::Result) ? 
Minitest::Result.from(self) : dup File.open(ENV["ISOLATION_OUTPUT"], "w") do |file| file.puts [Marshal.dump(test_result)].pack("m") end exit! else Tempfile.open("isolation") do |tmpfile| env = { "ISOLATION_TEST" => self.class.name, "ISOLATION_OUTPUT" => tmpfile.path } test_opts = "-n#{self.class.name}##{name}" load_path_args = [] $-I.each do |p| load_path_args << "-I" load_path_args << File.expand_path(p) end child = IO.popen([env, Gem.ruby, *load_path_args, $0, *ORIG_ARGV, test_opts]) begin Process.wait(child.pid) rescue Errno::ECHILD # The child process may exit before we wait nil end return tmpfile.read.unpack1("m") end end end ``` Complicated H4X to get this working in windows / jruby with no forking. rails class ActiveSupport::Concurrency::LoadInterlockAwareMonitor class ActiveSupport::Concurrency::LoadInterlockAwareMonitor ============================================================ Parent: Monitor A monitor that will permit dependency loading while blocked waiting for the lock. mon\_enter() Show source ``` # File activesupport/lib/active_support/concurrency/load_interlock_aware_monitor.rb, line 15 def mon_enter mon_try_enter || ActiveSupport::Dependencies.interlock.permit_concurrent_loads { super } end ``` Enters an exclusive section, but allows dependency loading while blocked. Calls superclass method synchronize(&block) Show source ``` # File activesupport/lib/active_support/concurrency/load_interlock_aware_monitor.rb, line 20 def synchronize(&block) Thread.handle_interrupt(EXCEPTION_NEVER) do mon_enter begin Thread.handle_interrupt(EXCEPTION_IMMEDIATE, &block) ensure mon_exit end end end ``` rails class ActiveSupport::Concurrency::ShareLock class ActiveSupport::Concurrency::ShareLock ============================================ Parent: [Object](../../object) Included modules: `MonitorMixin` A share/exclusive lock, otherwise known as a read/write lock.
[en.wikipedia.org/wiki/Readers%E2%80%93writer\_lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock) new() Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 50 def initialize super() @cv = new_cond @sharing = Hash.new(0) @waiting = {} @sleeping = {} @exclusive_thread = nil @exclusive_depth = 0 end ``` Calls superclass method exclusive(purpose: nil, compatible: [], after\_compatible: [], no\_wait: false) { || ... } Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 148 def exclusive(purpose: nil, compatible: [], after_compatible: [], no_wait: false) if start_exclusive(purpose: purpose, compatible: compatible, no_wait: no_wait) begin yield ensure stop_exclusive(compatible: after_compatible) end end end ``` Execute the supplied block while holding the Exclusive lock. If `no_wait` is set and the lock is not immediately available, returns `nil` without yielding. Otherwise, returns the result of the block. See `start_exclusive` for other options. sharing() { || ... } Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 159 def sharing start_sharing begin yield ensure stop_sharing end end ``` Execute the supplied block while holding the Share lock. start\_exclusive(purpose: nil, compatible: [], no\_wait: false) Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 76 def start_exclusive(purpose: nil, compatible: [], no_wait: false) synchronize do unless @exclusive_thread == Thread.current if busy_for_exclusive?(purpose) return false if no_wait yield_shares(purpose: purpose, compatible: compatible, block_share: true) do wait_for(:start_exclusive) { busy_for_exclusive?(purpose) } end end @exclusive_thread = Thread.current end @exclusive_depth += 1 true end end ``` Returns false if `no_wait` is set and the lock is not immediately available. Otherwise, returns true after the lock has been acquired. 
`purpose` and `compatible` work together; while this thread is waiting for the exclusive lock, it will yield its share (if any) to any other attempt whose `purpose` appears in this attempt's `compatible` list. This allows a “loose” upgrade, which, being less strict, prevents some classes of deadlocks. For many resources, loose upgrades are sufficient: if a thread is awaiting a lock, it is not running any other code. With `purpose` matching, it is possible to yield only to other threads whose activity will not interfere. start\_sharing() Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 114 def start_sharing synchronize do if @sharing[Thread.current] > 0 || @exclusive_thread == Thread.current # We already hold a lock; nothing to wait for elsif @waiting[Thread.current] # We're nested inside a +yield_shares+ call: we'll resume as # soon as there isn't an exclusive lock in our way wait_for(:start_sharing) { @exclusive_thread } else # This is an initial / outermost share call: any outstanding # requests for an exclusive lock get to go first wait_for(:start_sharing) { busy_for_sharing?(false) } end @sharing[Thread.current] += 1 end end ``` stop\_exclusive(compatible: []) Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 96 def stop_exclusive(compatible: []) synchronize do raise "invalid unlock" if @exclusive_thread != Thread.current @exclusive_depth -= 1 if @exclusive_depth == 0 @exclusive_thread = nil if eligible_waiters?(compatible) yield_shares(compatible: compatible, block_share: true) do wait_for(:stop_exclusive) { @exclusive_thread || eligible_waiters?(compatible) } end end @cv.broadcast end end end ``` Relinquish the exclusive lock. Must only be called by the thread that called [`start_exclusive`](sharelock#method-i-start_exclusive) (and currently holds the lock). 
stop\_sharing() Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 131 def stop_sharing synchronize do if @sharing[Thread.current] > 1 @sharing[Thread.current] -= 1 else @sharing.delete Thread.current @cv.broadcast end end end ``` yield\_shares(purpose: nil, compatible: [], block\_share: false) { || ... } Show source ``` # File activesupport/lib/active_support/concurrency/share_lock.rb, line 171 def yield_shares(purpose: nil, compatible: [], block_share: false) loose_shares = previous_wait = nil synchronize do if loose_shares = @sharing.delete(Thread.current) if previous_wait = @waiting[Thread.current] purpose = nil unless purpose == previous_wait[0] compatible &= previous_wait[1] end compatible |= [false] unless block_share @waiting[Thread.current] = [purpose, compatible] end @cv.broadcast end begin yield ensure synchronize do wait_for(:yield_shares) { @exclusive_thread && @exclusive_thread != Thread.current } if previous_wait @waiting[Thread.current] = previous_wait else @waiting.delete Thread.current end @sharing[Thread.current] = loose_shares if loose_shares end end end ``` Temporarily give up all held Share locks while executing the supplied block, allowing any `compatible` exclusive lock request to proceed. rails module ActiveSupport::Deprecation::Disallowed module ActiveSupport::Deprecation::Disallowed ============================================== disallowed\_warnings[W] Sets the criteria used to identify deprecation messages which should be disallowed. Can be an array containing strings, symbols, or regular expressions. (Symbols are treated as strings). These are compared against the text of the generated deprecation warning. Additionally the scalar symbol `:all` may be used to treat all deprecations as disallowed. 
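For orientation, the basic share/exclusive semantics can be illustrated with a minimal reader/writer lock built on Ruby's stdlib `Monitor`. This is a toy sketch, not the Active Support implementation: `TinyShareLock` is a made-up class, and it omits purposes, compatibility lists, re-entrancy, and the share-yielding upgrade behavior described above.

```ruby
require "monitor"

# Toy reader/writer lock: many threads may hold the share (read) side
# at once; the exclusive (write) side waits until no sharers remain.
class TinyShareLock
  def initialize
    @monitor = Monitor.new
    @cv      = @monitor.new_cond
    @sharers = 0
    @writing = false
  end

  # Execute the block while holding a share lock.
  def sharing
    @monitor.synchronize do
      @cv.wait_while { @writing }  # readers queue behind an active writer
      @sharers += 1
    end
    begin
      yield
    ensure
      @monitor.synchronize do
        @sharers -= 1
        @cv.broadcast if @sharers.zero?
      end
    end
  end

  # Execute the block while holding the exclusive lock.
  def exclusive
    @monitor.synchronize do
      @cv.wait_while { @writing || @sharers > 0 }  # wait for sole access
      @writing = true
    end
    begin
      yield
    ensure
      @monitor.synchronize do
        @writing = false
        @cv.broadcast
      end
    end
  end
end

lock    = TinyShareLock.new
results = Queue.new
readers = 3.times.map { Thread.new { lock.sharing { results << :read } } }
writer  = Thread.new { lock.exclusive { results << :write } }
(readers + [writer]).each(&:join)
results.size # => 4
```

`ShareLock`'s extra machinery exists precisely because this naive version can deadlock when a thread holding a share lock needs to upgrade to exclusive.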
Deprecations matching a substring or regular expression will be handled using the configured `ActiveSupport::Deprecation.disallowed_behavior` rather than `ActiveSupport::Deprecation.behavior`. disallowed\_warnings() Show source ``` # File activesupport/lib/active_support/deprecation/disallowed.rb, line 21 def disallowed_warnings @disallowed_warnings ||= [] end ``` Returns the configured criteria used to identify deprecation messages which should be treated as disallowed. rails class ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy class ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy ================================================================== Parent: ActiveSupport::Deprecation::DeprecationProxy [`DeprecatedInstanceVariableProxy`](deprecatedinstancevariableproxy) transforms an instance variable into a deprecated one. It takes an instance of a class, a method on that class and an instance variable. It optionally takes a deprecator as the last argument. The deprecator defaults to `ActiveSupport::Deprecation.instance` if none is specified. ``` class Example def initialize @request = ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy.new(self, :request, :@request) @_request = :special_request end def request @_request end def old_request @request end end example = Example.new # => #<Example:0x007fb9b31090b8 @_request=:special_request, @request=:special_request> example.old_request.to_s # => DEPRECATION WARNING: @request is deprecated!
Call request.to_s instead of @request.to_s (Backtrace information…) "special_request" example.request.to_s # => "special_request" ``` new(instance, method, var = "@#{method}", deprecator = ActiveSupport::Deprecation.instance) Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 89 def initialize(instance, method, var = "@#{method}", deprecator = ActiveSupport::Deprecation.instance) @instance = instance @method = method @var = var @deprecator = deprecator end ``` rails module ActiveSupport::Deprecation::Reporting module ActiveSupport::Deprecation::Reporting ============================================= RAILS\_GEM\_ROOT gem\_name[RW] Name of gem where method is deprecated silenced[W] Whether to print a message (silent mode) allow(allowed\_warnings = :all, if: true) { || ... } Show source ``` # File activesupport/lib/active_support/deprecation/reporting.rb, line 72 def allow(allowed_warnings = :all, if: true, &block) conditional = binding.local_variable_get(:if) conditional = conditional.call if conditional.respond_to?(:call) if conditional @explicitly_allowed_warnings.bind(allowed_warnings, &block) else yield end end ``` Allow previously disallowed deprecation warnings within the block. `allowed_warnings` can be an array containing strings, symbols, or regular expressions. (Symbols are treated as strings). These are compared against the text of deprecation warning messages generated within the block. Matching warnings will be exempt from the rules set by `ActiveSupport::Deprecation.disallowed_warnings`. The optional `if:` argument accepts a truthy/falsy value or an object that responds to `.call`. If truthy, then matching warnings will be allowed. If falsey then the method yields to the block without allowing the warning.
``` ActiveSupport::Deprecation.disallowed_behavior = :raise ActiveSupport::Deprecation.disallowed_warnings = [ "something broke" ] ActiveSupport::Deprecation.warn('something broke!') # => ActiveSupport::DeprecationException ActiveSupport::Deprecation.allow ['something broke'] do ActiveSupport::Deprecation.warn('something broke!') end # => nil ActiveSupport::Deprecation.allow ['something broke'], if: Rails.env.production? do ActiveSupport::Deprecation.warn('something broke!') end # => ActiveSupport::DeprecationException for dev/test, nil for production ``` deprecation\_warning(deprecated\_method\_name, message = nil, caller\_backtrace = nil) Show source ``` # File activesupport/lib/active_support/deprecation/reporting.rb, line 86 def deprecation_warning(deprecated_method_name, message = nil, caller_backtrace = nil) caller_backtrace ||= caller_locations(2) deprecated_method_warning(deprecated_method_name, message).tap do |msg| warn(msg, caller_backtrace) end end ``` silence(&block) Show source ``` # File activesupport/lib/active_support/deprecation/reporting.rb, line 40 def silence(&block) @silenced_thread.bind(true, &block) end ``` Silence deprecation warnings within the block. ``` ActiveSupport::Deprecation.warn('something broke!') # => "DEPRECATION WARNING: something broke! 
(called from your_code.rb:1)" ActiveSupport::Deprecation.silence do ActiveSupport::Deprecation.warn('something broke!') end # => nil ``` silenced() Show source ``` # File activesupport/lib/active_support/deprecation/reporting.rb, line 82 def silenced @silenced || @silenced_thread.value end ``` warn(message = nil, callstack = nil) Show source ``` # File activesupport/lib/active_support/deprecation/reporting.rb, line 18 def warn(message = nil, callstack = nil) return if silenced callstack ||= caller_locations(2) deprecation_message(callstack, message).tap do |m| if deprecation_disallowed?(message) disallowed_behavior.each { |b| b.call(m, callstack, deprecation_horizon, gem_name) } else behavior.each { |b| b.call(m, callstack, deprecation_horizon, gem_name) } end end end ``` Outputs a deprecation warning to the output configured by `ActiveSupport::Deprecation.behavior`. ``` ActiveSupport::Deprecation.warn('something broke!') # => "DEPRECATION WARNING: something broke! (called from your_code.rb:1)" ``` rails module ActiveSupport::Deprecation::MethodWrapper module ActiveSupport::Deprecation::MethodWrapper ================================================= deprecate\_methods(target\_module, \*method\_names) Show source ``` # File activesupport/lib/active_support/deprecation/method_wrappers.rb, line 52 def deprecate_methods(target_module, *method_names) options = method_names.extract_options! 
deprecator = options.delete(:deprecator) || self method_names += options.keys mod = nil method_names.each do |method_name| message = options[method_name] if target_module.method_defined?(method_name) || target_module.private_method_defined?(method_name) method = target_module.instance_method(method_name) target_module.module_eval do redefine_method(method_name) do |*args, &block| deprecator.deprecation_warning(method_name, message) method.bind_call(self, *args, &block) end ruby2_keywords(method_name) end else mod ||= Module.new mod.module_eval do define_method(method_name) do |*args, &block| deprecator.deprecation_warning(method_name, message) super(*args, &block) end ruby2_keywords(method_name) end end end target_module.prepend(mod) if mod end ``` Declare that a method has been deprecated. ``` class Fred def aaa; end def bbb; end def ccc; end def ddd; end def eee; end end ``` Using the default deprecator: ``` ActiveSupport::Deprecation.deprecate_methods(Fred, :aaa, bbb: :zzz, ccc: 'use Bar#ccc instead') # => Fred Fred.new.aaa # DEPRECATION WARNING: aaa is deprecated and will be removed from Rails 5.1. (called from irb_binding at (irb):10) # => nil Fred.new.bbb # DEPRECATION WARNING: bbb is deprecated and will be removed from Rails 5.1 (use zzz instead). (called from irb_binding at (irb):11) # => nil Fred.new.ccc # DEPRECATION WARNING: ccc is deprecated and will be removed from Rails 5.1 (use Bar#ccc instead). (called from irb_binding at (irb):12) # => nil ``` Passing in a custom deprecator: ``` custom_deprecator = ActiveSupport::Deprecation.new('next-release', 'MyGem') ActiveSupport::Deprecation.deprecate_methods(Fred, ddd: :zzz, deprecator: custom_deprecator) # => [:ddd] Fred.new.ddd DEPRECATION WARNING: ddd is deprecated and will be removed from MyGem next-release (use zzz instead). 
(called from irb_binding at (irb):15) # => nil ``` Using a custom deprecator directly: ``` custom_deprecator = ActiveSupport::Deprecation.new('next-release', 'MyGem') custom_deprecator.deprecate_methods(Fred, eee: :zzz) # => [:eee] Fred.new.eee DEPRECATION WARNING: eee is deprecated and will be removed from MyGem next-release (use zzz instead). (called from irb_binding at (irb):18) # => nil ``` Calls superclass method rails module ActiveSupport::Deprecation::DeprecatedConstantAccessor module ActiveSupport::Deprecation::DeprecatedConstantAccessor ============================================================== [`DeprecatedConstantAccessor`](deprecatedconstantaccessor) transforms a constant into a deprecated one by hooking `const_missing`. It takes the names of an old (deprecated) constant and of a new constant (both in string form) and optionally a deprecator. The deprecator defaults to `ActiveSupport::Deprecation.instance` if none is specified. The deprecated constant now returns the same object as the new one rather than a proxy object, so it can be used transparently in `rescue` blocks etc. ``` PLANETS = %w(mercury venus earth mars jupiter saturn uranus neptune pluto) # (In a later update, the original implementation of `PLANETS` has been removed.) PLANETS_POST_2006 = %w(mercury venus earth mars jupiter saturn uranus neptune) include ActiveSupport::Deprecation::DeprecatedConstantAccessor deprecate_constant 'PLANETS', 'PLANETS_POST_2006' PLANETS.map { |planet| planet.capitalize } # => DEPRECATION WARNING: PLANETS is deprecated! Use PLANETS_POST_2006 instead.
(Backtrace information…) ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune"] ``` included(base) Show source ``` # File activesupport/lib/active_support/deprecation/constant_accessor.rb, line 29 def self.included(base) require "active_support/inflector/methods" extension = Module.new do def const_missing(missing_const_name) if class_variable_defined?(:@@_deprecated_constants) if (replacement = class_variable_get(:@@_deprecated_constants)[missing_const_name.to_s]) replacement[:deprecator].warn(replacement[:message] || "#{name}::#{missing_const_name} is deprecated! Use #{replacement[:new]} instead.", caller_locations) return ActiveSupport::Inflector.constantize(replacement[:new].to_s) end end super end def deprecate_constant(const_name, new_constant, message: nil, deprecator: ActiveSupport::Deprecation.instance) class_variable_set(:@@_deprecated_constants, {}) unless class_variable_defined?(:@@_deprecated_constants) class_variable_get(:@@_deprecated_constants)[const_name.to_s] = { new: new_constant, message: message, deprecator: deprecator } end end base.singleton_class.prepend extension end ``` const\_missing(missing\_const\_name) Show source ``` # File activesupport/lib/active_support/deprecation/constant_accessor.rb, line 33 def const_missing(missing_const_name) if class_variable_defined?(:@@_deprecated_constants) if (replacement = class_variable_get(:@@_deprecated_constants)[missing_const_name.to_s]) replacement[:deprecator].warn(replacement[:message] || "#{name}::#{missing_const_name} is deprecated! 
Use #{replacement[:new]} instead.", caller_locations) return ActiveSupport::Inflector.constantize(replacement[:new].to_s) end end super end ``` Calls superclass method deprecate\_constant(const\_name, new\_constant, message: nil, deprecator: ActiveSupport::Deprecation.instance) Show source ``` # File activesupport/lib/active_support/deprecation/constant_accessor.rb, line 43 def deprecate_constant(const_name, new_constant, message: nil, deprecator: ActiveSupport::Deprecation.instance) class_variable_set(:@@_deprecated_constants, {}) unless class_variable_defined?(:@@_deprecated_constants) class_variable_get(:@@_deprecated_constants)[const_name.to_s] = { new: new_constant, message: message, deprecator: deprecator } end ```
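The `const_missing` hook shown above needs Active Support loaded, but the mechanism itself can be sketched in plain Ruby. Everything below (`Legacy`, `OLD_LIMIT`, `NEW_LIMIT`) is an illustrative stand-in, not part of the Rails API:

```ruby
# Plain-Ruby sketch of the const_missing-based deprecation hook used by
# DeprecatedConstantAccessor. Module and constant names are made up.
module Legacy
  NEW_LIMIT = 100

  # Registry mapping deprecated constant names to their replacements.
  @deprecated_constants = { "OLD_LIMIT" => "NEW_LIMIT" }

  def self.const_missing(name)
    if (replacement = @deprecated_constants[name.to_s])
      warn "DEPRECATION WARNING: #{self}::#{name} is deprecated! Use #{replacement} instead."
      return const_get(replacement)   # resolve to the same object as the new constant
    end
    super   # unknown constants still raise NameError
  end
end

Legacy::OLD_LIMIT   # warns on $stderr, then resolves to Legacy::NEW_LIMIT (100)
```

Because the old constant resolves to the very same object as the new one, it behaves transparently in comparisons and `rescue` clauses, which is the property the real `DeprecatedConstantAccessor` documents.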
rails class ActiveSupport::Deprecation::DeprecatedConstantProxy class ActiveSupport::Deprecation::DeprecatedConstantProxy ========================================================== Parent: [Module](../../module) [`DeprecatedConstantProxy`](deprecatedconstantproxy) transforms a constant into a deprecated one. It takes the names of an old (deprecated) constant and of a new constant (both in string form) and optionally a deprecator. The deprecator defaults to `ActiveSupport::Deprecation.instance` if none is specified. The deprecated constant now returns the value of the new one. ``` PLANETS = %w(mercury venus earth mars jupiter saturn uranus neptune pluto) # (In a later update, the original implementation of `PLANETS` has been removed.) PLANETS_POST_2006 = %w(mercury venus earth mars jupiter saturn uranus neptune) PLANETS = ActiveSupport::Deprecation::DeprecatedConstantProxy.new('PLANETS', 'PLANETS_POST_2006') PLANETS.map { |planet| planet.capitalize } # => DEPRECATION WARNING: PLANETS is deprecated! Use PLANETS_POST_2006 instead. (Backtrace information…) ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune"] ``` new(\*args, \*\*options, &block) Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 124 def self.new(*args, **options, &block) object = args.first return object unless object super end ``` Calls superclass method new(old\_const, new\_const, deprecator = ActiveSupport::Deprecation.instance, message: "#{old\_const} is deprecated! Use #{new\_const} instead.") Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 131 def initialize(old_const, new_const, deprecator = ActiveSupport::Deprecation.instance, message: "#{old_const} is deprecated! 
Use #{new_const} instead.") Kernel.require "active_support/inflector/methods" @old_const = old_const @new_const = new_const @deprecator = deprecator @message = message end ``` class() Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 157 def class target.class end ``` Returns the class of the new constant. ``` PLANETS_POST_2006 = %w(mercury venus earth mars jupiter saturn uranus neptune) PLANETS = ActiveSupport::Deprecation::DeprecatedConstantProxy.new('PLANETS', 'PLANETS_POST_2006') PLANETS.class # => Array ``` inspect() Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 144 def inspect target.inspect end ``` Don't give a deprecation warning on inspect since test/unit and error logs rely on it for diagnostics. rails class ActiveSupport::Deprecation::DeprecatedObjectProxy class ActiveSupport::Deprecation::DeprecatedObjectProxy ======================================================== Parent: ActiveSupport::Deprecation::DeprecationProxy [`DeprecatedObjectProxy`](deprecatedobjectproxy) transforms an object into a deprecated one. It takes an object, a deprecation message and optionally a deprecator. The deprecator defaults to `ActiveSupport::Deprecation.instance` if none is specified. ``` deprecated_object = ActiveSupport::Deprecation::DeprecatedObjectProxy.new(Object.new, "This object is now deprecated") # => #<Object:0x007fb9b34c34b0> deprecated_object.to_s DEPRECATION WARNING: This object is now deprecated. 
(Backtrace) # => "#<Object:0x007fb9b34c34b0>" ``` new(object, message, deprecator = ActiveSupport::Deprecation.instance) Show source ``` # File activesupport/lib/active_support/deprecation/proxy_wrappers.rb, line 40 def initialize(object, message, deprecator = ActiveSupport::Deprecation.instance) @object = object @message = message @deprecator = deprecator end ``` rails module ActiveSupport::Deprecation::Behavior module ActiveSupport::Deprecation::Behavior ============================================ The [`Behavior`](behavior) module allows you to determine how deprecation messages are displayed. You can create a custom behavior or set any from the `DEFAULT_BEHAVIORS` constant. Available behaviors are: `raise` Raise `ActiveSupport::DeprecationException`. `stderr` Log all deprecation warnings to `$stderr`. `log` Log all deprecation warnings to `Rails.logger`. `notify` Use `ActiveSupport::Notifications` to notify `deprecation.rails`. `silence` Do nothing. On Rails, set `config.active_support.report_deprecations = false` to disable all behaviors. Setting behaviors only affects deprecations that happen after boot time. For more information you can read the documentation of the `behavior=` method. debug[RW] Whether to print a backtrace along with the warning. behavior() Show source ``` # File activesupport/lib/active_support/deprecation/behaviors.rb, line 66 def behavior @behavior ||= [DEFAULT_BEHAVIORS[:stderr]] end ``` Returns the current behavior or, if one isn't set, defaults to `:stderr`. behavior=(behavior) Show source ``` # File activesupport/lib/active_support/deprecation/behaviors.rb, line 99 def behavior=(behavior) @behavior = Array(behavior).map { |b| DEFAULT_BEHAVIORS[b] || arity_coerce(b) } end ``` Sets the behavior to the specified value. Can be a single value, array, or an object that responds to `call`. Available behaviors: `raise` Raise `ActiveSupport::DeprecationException`. `stderr` Log all deprecation warnings to `$stderr`. 
`log` Log all deprecation warnings to `Rails.logger`. `notify` Use `ActiveSupport::Notifications` to notify `deprecation.rails`. `silence` Do nothing. Setting behaviors only affects deprecations that happen after boot time. [`Deprecation`](../deprecation) warnings raised by gems are not affected by this setting because they happen before Rails boots up. ``` ActiveSupport::Deprecation.behavior = :stderr ActiveSupport::Deprecation.behavior = [:stderr, :log] ActiveSupport::Deprecation.behavior = MyCustomHandler ActiveSupport::Deprecation.behavior = ->(message, callstack, deprecation_horizon, gem_name) { # custom stuff } ``` If you are using Rails, you can set `config.active_support.report_deprecations = false` to disable all deprecation behaviors. This is similar to the `silence` option but more performant. disallowed\_behavior() Show source ``` # File activesupport/lib/active_support/deprecation/behaviors.rb, line 71 def disallowed_behavior @disallowed_behavior ||= [DEFAULT_BEHAVIORS[:raise]] end ``` Returns the current behavior for disallowed deprecations or if one isn't set, defaults to `:raise`. disallowed\_behavior=(behavior) Show source ``` # File activesupport/lib/active_support/deprecation/behaviors.rb, line 107 def disallowed_behavior=(behavior) @disallowed_behavior = Array(behavior).map { |b| DEFAULT_BEHAVIORS[b] || arity_coerce(b) } end ``` Sets the behavior for disallowed deprecations (those configured by [`ActiveSupport::Deprecation.disallowed_warnings=`](disallowed#attribute-i-disallowed_warnings)) to the specified value. As with `behavior=`, this can be a single value, array, or an object that responds to `call`. 
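A custom behavior is simply an object responding to `call` with the four arguments shown in the lambda example above. This pure-Ruby sketch collects warnings into an array, the way a test suite might; the direct `collector.call` at the end only simulates the arguments Rails would pass to each configured behavior (the horizon `"8.1"` and gem name `"MyGem"` are made-up values):

```ruby
# Sketch of a custom deprecation behavior: any callable accepting
# (message, callstack, deprecation_horizon, gem_name) qualifies.
collected = []
collector = lambda do |message, callstack, deprecation_horizon, gem_name|
  # Record instead of printing -- handy for asserting on deprecations in tests.
  collected << { message: message, horizon: deprecation_horizon, gem: gem_name }
end

# Simulated invocation, mirroring what a deprecator would do with this behavior:
collector.call(
  "DEPRECATION WARNING: old_api is deprecated (use new_api instead)",
  caller_locations,
  "8.1",
  "MyGem"
)

collected.first[:gem]   # => "MyGem"
```

With Active Support loaded, the same object would be installed via `ActiveSupport::Deprecation.behavior = collector`.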
rails module ActiveSupport::Multibyte::Unicode module ActiveSupport::Multibyte::Unicode ========================================= UNICODE\_VERSION The [`Unicode`](unicode) version that is supported by the implementation. compose(codepoints) Show source ``` # File activesupport/lib/active_support/multibyte/unicode.rb, line 21 def compose(codepoints) codepoints.pack("U*").unicode_normalize(:nfc).codepoints end ``` Compose decomposed characters to the composed form. decompose(type, codepoints) Show source ``` # File activesupport/lib/active_support/multibyte/unicode.rb, line 12 def decompose(type, codepoints) if type == :compatibility codepoints.pack("U*").unicode_normalize(:nfkd).codepoints else codepoints.pack("U*").unicode_normalize(:nfd).codepoints end end ``` Decompose composed characters to the decomposed form. tidy\_bytes(string, force = false) Show source ``` # File activesupport/lib/active_support/multibyte/unicode.rb, line 32 def tidy_bytes(string, force = false) return string if string.empty? || string.ascii_only? return recode_windows1252_chars(string) if force string.scrub { |bad| recode_windows1252_chars(bad) } end ``` Replaces all ISO-8859-1 or CP1252 characters by their UTF-8 equivalent resulting in a valid UTF-8 string. Passing `true` will forcibly tidy all bytes, assuming that the string's encoding is entirely CP1252 or ISO-8859-1. rails class ActiveSupport::Multibyte::Chars class ActiveSupport::Multibyte::Chars ====================================== Parent: [Object](../../object) [`Chars`](chars) enables you to work transparently with UTF-8 encoding in the Ruby [`String`](../../string) class without having extensive knowledge about the encoding. A [`Chars`](chars) object accepts a string upon initialization and proxies [`String`](../../string) methods in an encoding safe manner. All the normal [`String`](../../string) methods are also implemented on the proxy. 
[`String`](../../string) methods are proxied through the [`Chars`](chars) object, and can be accessed through the `mb_chars` method. Methods which would normally return a [`String`](../../string) object now return a [`Chars`](chars) object so methods can be chained. ``` 'The Perfect String '.mb_chars.downcase.strip # => #<ActiveSupport::Multibyte::Chars:0x007fdc434ccc10 @wrapped_string="the perfect string"> ``` [`Chars`](chars) objects are perfectly interchangeable with [`String`](../../string) objects as long as no explicit class checks are made. If certain methods do explicitly check the class, call `to_s` before you pass chars objects to them. ``` bad.explicit_checking_method 'T'.mb_chars.downcase.to_s ``` The default [`Chars`](chars) implementation assumes that the encoding of the string is UTF-8, if you want to handle different encodings you can write your own multibyte string handler and configure it through [`ActiveSupport::Multibyte.proxy_class`](../multibyte#method-c-proxy_class). ``` class CharsForUTF32 def size @wrapped_string.size / 4 end def self.accepts?(string) string.length % 4 == 0 end end ActiveSupport::Multibyte.proxy_class = CharsForUTF32 ``` to\_s[R] to\_str[R] wrapped\_string[R] new(string) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 54 def initialize(string) @wrapped_string = string @wrapped_string.force_encoding(Encoding::UTF_8) unless @wrapped_string.frozen? end ``` Creates a new [`Chars`](chars) instance by wrapping *string*. compose() Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 138 def compose chars(Unicode.compose(@wrapped_string.codepoints.to_a).pack("U*")) end ``` Performs composition on all the characters. 
``` 'é'.length # => 1 'é'.mb_chars.compose.to_s.length # => 1 ``` decompose() Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 130 def decompose chars(Unicode.decompose(:canonical, @wrapped_string.codepoints.to_a).pack("U*")) end ``` Performs canonical decomposition on all the characters. ``` 'é'.length # => 1 'é'.mb_chars.decompose.to_s.length # => 2 ``` grapheme\_length() Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 146 def grapheme_length @wrapped_string.grapheme_clusters.length end ``` Returns the number of grapheme clusters in the string. ``` 'क्षि'.mb_chars.length # => 4 'क्षि'.mb_chars.grapheme_length # => 2 ``` limit(limit) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 113 def limit(limit) chars(@wrapped_string.truncate_bytes(limit, omission: nil)) end ``` Limits the byte size of the string to a number of bytes without breaking characters. Usable when the storage for a string is limited for some reason. ``` 'こんにちは'.mb_chars.limit(7).to_s # => "こん" ``` method\_missing(method, \*args, &block) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 60 def method_missing(method, *args, &block) result = @wrapped_string.__send__(method, *args, &block) if method.end_with?("!") self if result else result.kind_of?(String) ? chars(result) : result end end ``` Forward all undefined methods to the wrapped string. respond\_to\_missing?(method, include\_private) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 72 def respond_to_missing?(method, include_private) @wrapped_string.respond_to?(method, include_private) end ``` Returns `true` if *obj* responds to the given method. Private methods are included in the search only if the optional second parameter evaluates to `true`. 
reverse() Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 104 def reverse chars(@wrapped_string.grapheme_clusters.reverse.join) end ``` Reverses all characters in the string. ``` 'Café'.mb_chars.reverse.to_s # => 'éfaC' ``` slice!(\*args) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 94 def slice!(*args) string_sliced = @wrapped_string.slice!(*args) if string_sliced chars(string_sliced) end end ``` Works like `String#slice!`, but returns an instance of [`Chars`](chars), or `nil` if the string was not modified. The string will not be modified if the range given is out of bounds ``` string = 'Welcome' string.mb_chars.slice!(3) # => #<ActiveSupport::Multibyte::Chars:0x000000038109b8 @wrapped_string="c"> string # => 'Welome' string.mb_chars.slice!(0..3) # => #<ActiveSupport::Multibyte::Chars:0x00000002eb80a0 @wrapped_string="Welo"> string # => 'me' ``` split(\*args) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 81 def split(*args) @wrapped_string.split(*args).map { |i| self.class.new(i) } end ``` Works just like `String#split`, with the exception that the items in the resulting list are [`Chars`](chars) instances instead of [`String`](../../string). This makes chaining methods easier. ``` 'Café périferôl'.mb_chars.split(/é/).map { |part| part.upcase.to_s } # => ["CAF", " P", "RIFERÔL"] ``` tidy\_bytes(force = false) Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 155 def tidy_bytes(force = false) chars(Unicode.tidy_bytes(@wrapped_string, force)) end ``` Replaces all ISO-8859-1 or CP1252 characters by their UTF-8 equivalent resulting in a valid UTF-8 string. Passing `true` will forcibly tidy all bytes, assuming that the string's encoding is entirely CP1252 or ISO-8859-1. 
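As the `Show source` listings above indicate, `compose`, `decompose`, and `grapheme_length` are thin wrappers over Ruby's own `String#unicode_normalize` and `String#grapheme_clusters`, so their behavior can be checked without Active Support:

```ruby
# The stdlib operations the Chars methods delegate to (no Active Support needed).
e_acute    = [0xE9].pack("U")                  # "é" as one precomposed codepoint (NFC)
decomposed = e_acute.unicode_normalize(:nfd)   # "e" followed by U+0301 combining acute

e_acute.length                                 # 1 codepoint
decomposed.length                              # 2 codepoints
decomposed.unicode_normalize(:nfc) == e_acute  # true -- composition round-trips
decomposed.grapheme_clusters.length            # 1 user-perceived character
```

This is why `'é'.mb_chars.decompose.to_s.length` is 2 while `grapheme_length` still reports 1: codepoint count and grapheme-cluster count diverge for combining sequences.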
titlecase() Alias for: [titleize](chars#method-i-titleize) titleize() Show source ``` # File activesupport/lib/active_support/multibyte/chars.rb, line 121 def titleize chars(downcase.to_s.gsub(/\b('?\S)/u) { $1.upcase }) end ``` Capitalizes the first letter of every word, when possible. ``` "ÉL QUE SE ENTERÓ".mb_chars.titleize.to_s # => "Él Que Se Enteró" "日本語".mb_chars.titleize.to_s # => "日本語" ``` Also aliased as: [titlecase](chars#method-i-titlecase) rails class ActiveSupport::Notifications::Instrumenter class ActiveSupport::Notifications::Instrumenter ================================================= Parent: [Object](../../object) Instrumenters are stored in a thread local. id[R] new(notifier) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 11 def initialize(notifier) @id = unique_id @notifier = notifier end ``` finish(name, payload) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 44 def finish(name, payload) @notifier.finish name, @id, payload end ``` Send a finish notification with `name` and `payload`. finish\_with\_state(listeners\_state, name, payload) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 48 def finish_with_state(listeners_state, name, payload) @notifier.finish name, @id, payload, listeners_state end ``` instrument(name, payload = {}) { |payload| ... } Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 20 def instrument(name, payload = {}) # some of the listeners might have state listeners_state = start name, payload begin yield payload if block_given? rescue Exception => e payload[:exception] = [e.class.name, e.message] payload[:exception_object] = e raise e ensure finish_with_state listeners_state, name, payload end end ``` Given a block, instrument it by measuring the time taken to execute and publish it. Without a block, simply send a message via the notifier. 
Notice that events get sent even if an error occurs in the passed-in block. start(name, payload) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 39 def start(name, payload) @notifier.start name, @id, payload end ``` Send a start notification with `name` and `payload`. rails class ActiveSupport::Notifications::Event class ActiveSupport::Notifications::Event ========================================== Parent: [Object](../../object) children[R] end[R] name[R] payload[RW] time[R] transaction\_id[R] new(name, start, ending, transaction\_id, payload) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 62 def initialize(name, start, ending, transaction_id, payload) @name = name @payload = payload.dup @time = start ? start.to_f * 1_000.0 : start @transaction_id = transaction_id @end = ending ? ending.to_f * 1_000.0 : ending @children = [] @cpu_time_start = 0.0 @cpu_time_finish = 0.0 @allocation_count_start = 0 @allocation_count_finish = 0 end ``` <<(event) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 136 def <<(event) @children << event end ``` allocations() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 116 def allocations @allocation_count_finish - @allocation_count_start end ``` Returns the number of allocations made since the call to `start!` and the call to `finish!` cpu\_time() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 104 def cpu_time @cpu_time_finish - @cpu_time_start end ``` Returns the CPU time (in milliseconds) passed since the call to `start!` and the call to `finish!` duration() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 132 def duration self.end - time end ``` Returns the difference in milliseconds between when the execution of the event started and when it ended. 
``` ActiveSupport::Notifications.subscribe('wait') do |*args| @event = ActiveSupport::Notifications::Event.new(*args) end ActiveSupport::Notifications.instrument('wait') do sleep 1 end @event.duration # => 1000.138 ``` finish!() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 96 def finish! @cpu_time_finish = now_cpu @end = now @allocation_count_finish = now_allocations end ``` Record information at the time this event finishes idle\_time() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 110 def idle_time duration - cpu_time end ``` Returns the idle time (in milliseconds) passed since the call to `start!` and the call to `finish!` parent\_of?(event) Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 140 def parent_of?(event) @children.include? event end ``` record() { |payload| ... } Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 75 def record start! begin yield payload if block_given? rescue Exception => e payload[:exception] = [e.class.name, e.message] payload[:exception_object] = e raise e ensure finish! end end ``` start!() Show source ``` # File activesupport/lib/active_support/notifications/instrumenter.rb, line 89 def start! @time = now @cpu_time_start = now_cpu @allocation_count_start = now_allocations end ``` Record information at the time this event starts
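The key contract of `Instrumenter#instrument` is its `ensure` clause: the event is published even when the block raises, with the exception recorded in the payload. That contract can be sketched in plain Ruby (the method and the `events` array below are illustrative, not the Rails API):

```ruby
# Plain-Ruby sketch of the instrument contract: time the block, capture
# any exception into the payload, and always publish -- even on raise.
def instrument(name, payload = {}, events:)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  begin
    yield payload if block_given?
  rescue Exception => e
    payload[:exception] = [e.class.name, e.message]
    raise
  ensure
    finished = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    events << { name: name, duration_ms: (finished - started) * 1000.0, payload: payload }
  end
end

events = []

# Happy path: the block may enrich the payload, as Rails listeners expect.
instrument("render", { view: "index" }, events: events) { |p| p[:lines] = 42 }

# Failure path: the event is still recorded before the exception propagates.
begin
  instrument("boom", {}, events: events) { raise "oops" }
rescue RuntimeError
end

events.map { |e| e[:name] }         # => ["render", "boom"]
events.last[:payload][:exception]   # => ["RuntimeError", "oops"]
```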
rails module ActiveModel::Conversion module ActiveModel::Conversion =============================== Active Model Conversion ----------------------- Handles default conversions: [`to_model`](conversion#method-i-to_model), [`to_key`](conversion#method-i-to_key), [`to_param`](conversion#method-i-to_param), and to\_partial\_path. Let's take for example this non-persisted object. ``` class ContactMessage include ActiveModel::Conversion # ContactMessage are never persisted in the DB def persisted? false end end cm = ContactMessage.new cm.to_model == cm # => true cm.to_key # => nil cm.to_param # => nil cm.to_partial_path # => "contact_messages/contact_message" ``` to\_key() Show source ``` # File activemodel/lib/active_model/conversion.rb, line 59 def to_key key = respond_to?(:id) && id key ? [key] : nil end ``` Returns an [`Array`](../array) of all key attributes if any of the attributes is set, whether or not the object is persisted. Returns `nil` if there are no key attributes. ``` class Person include ActiveModel::Conversion attr_accessor :id def initialize(id) @id = id end end person = Person.new(1) person.to_key # => [1] ``` to\_model() Show source ``` # File activemodel/lib/active_model/conversion.rb, line 41 def to_model self end ``` If your object is already designed to implement all of the Active Model you can use the default `:to_model` implementation, which simply returns `self`. ``` class Person include ActiveModel::Conversion end person = Person.new person.to_model == person # => true ``` If your model does not act like an Active Model object, then you should define `:to_model` yourself returning a proxy object that wraps your object with Active Model compliant methods. to\_param() Show source ``` # File activemodel/lib/active_model/conversion.rb, line 82 def to_param (persisted? && key = to_key) ? key.join("-") : nil end ``` Returns a `string` representing the object's key suitable for use in URLs, or `nil` if `persisted?` is `false`. 
``` class Person include ActiveModel::Conversion attr_accessor :id def initialize(id) @id = id end def persisted? true end end person = Person.new(1) person.to_param # => "1" ``` to\_partial\_path() Show source ``` # File activemodel/lib/active_model/conversion.rb, line 95 def to_partial_path self.class._to_partial_path end ``` Returns a `string` identifying the path associated with the object. ActionPack uses this to find a suitable partial to represent the object. ``` class Person include ActiveModel::Conversion end person = Person.new person.to_partial_path # => "people/person" ``` rails class ActiveModel::MissingAttributeError class ActiveModel::MissingAttributeError ========================================= Parent: NoMethodError Raised when an attribute is not defined. ``` class User < ActiveRecord::Base has_many :pets end user = User.first user.pets.select(:id).first.user_id # => ActiveModel::MissingAttributeError: missing attribute: user_id ``` rails module ActiveModel::Serialization module ActiveModel::Serialization ================================== Active Model Serialization -------------------------- Provides a basic serialization to a [`serializable_hash`](serialization#method-i-serializable_hash) for your objects. A minimal implementation could be: ``` class Person include ActiveModel::Serialization attr_accessor :name def attributes {'name' => nil} end end ``` Which would provide you with: ``` person = Person.new person.serializable_hash # => {"name"=>nil} person.name = "Bob" person.serializable_hash # => {"name"=>"Bob"} ``` An `attributes` hash must be defined and should contain any attributes you need to be serialized. `Attributes` must be strings, not symbols. When called, serializable hash will use instance methods that match the name of the attributes hash's keys. In order to override this behavior, take a look at the private method `read_attribute_for_serialization`. 
[`ActiveModel::Serializers::JSON`](serializers/json) module automatically includes the `ActiveModel::Serialization` module, so there is no need to explicitly include `ActiveModel::Serialization`. A minimal implementation including JSON would be: ``` class Person include ActiveModel::Serializers::JSON attr_accessor :name def attributes {'name' => nil} end end ``` Which would provide you with: ``` person = Person.new person.serializable_hash # => {"name"=>nil} person.as_json # => {"name"=>nil} person.to_json # => "{\"name\":null}" person.name = "Bob" person.serializable_hash # => {"name"=>"Bob"} person.as_json # => {"name"=>"Bob"} person.to_json # => "{\"name\":\"Bob\"}" ``` Valid options are `:only`, `:except`, `:methods` and `:include`. The following are all valid examples: ``` person.serializable_hash(only: 'name') person.serializable_hash(include: :address) person.serializable_hash(include: { address: { only: 'city' }}) ``` serializable\_hash(options = nil) Show source ``` # File activemodel/lib/active_model/serialization.rb, line 125 def serializable_hash(options = nil) attribute_names = self.attribute_names return serializable_attributes(attribute_names) if options.blank? if only = options[:only] attribute_names &= Array(only).map(&:to_s) elsif except = options[:except] attribute_names -= Array(except).map(&:to_s) end hash = serializable_attributes(attribute_names) Array(options[:methods]).each { |m| hash[m.to_s] = send(m) } serializable_add_includes(options) do |association, records, opts| hash[association.to_s] = if records.respond_to?(:to_ary) records.to_ary.map { |a| a.serializable_hash(opts) } else records.serializable_hash(opts) end end hash end ``` Returns a serialized hash of your object. 
``` class Person include ActiveModel::Serialization attr_accessor :name, :age def attributes {'name' => nil, 'age' => nil} end def capitalized_name name.capitalize end end person = Person.new person.name = 'bob' person.age = 22 person.serializable_hash # => {"name"=>"bob", "age"=>22} person.serializable_hash(only: :name) # => {"name"=>"bob"} person.serializable_hash(except: :name) # => {"age"=>22} person.serializable_hash(methods: :capitalized_name) # => {"name"=>"bob", "age"=>22, "capitalized_name"=>"Bob"} ``` Example with `:include` option ``` class User include ActiveModel::Serializers::JSON attr_accessor :name, :notes # Emulate has_many :notes def attributes {'name' => nil} end end class Note include ActiveModel::Serializers::JSON attr_accessor :title, :text def attributes {'title' => nil, 'text' => nil} end end note = Note.new note.title = 'Battle of Austerlitz' note.text = 'Some text here' user = User.new user.name = 'Napoleon' user.notes = [note] user.serializable_hash # => {"name" => "Napoleon"} user.serializable_hash(include: { notes: { only: 'title' }}) # => {"name" => "Napoleon", "notes" => [{"title"=>"Battle of Austerlitz"}]} ``` rails class ActiveModel::StrictValidationFailed class ActiveModel::StrictValidationFailed ========================================== Parent: StandardError Raised when a validation cannot be corrected by end users and is considered exceptional. ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name, strict: true end person = Person.new person.name = nil person.valid? # => ActiveModel::StrictValidationFailed: Name can't be blank ``` rails module ActiveModel::Dirty module ActiveModel::Dirty ========================== Included modules: [ActiveModel::AttributeMethods](attributemethods) Active Model Dirty ------------------ Provides a way to track changes in your object in the same way as Active Record does. 
The requirements for implementing [`ActiveModel::Dirty`](dirty) are: * `include ActiveModel::Dirty` in your object. * Call `define_attribute_methods` passing each method you want to track. * Call `[attr_name]_will_change!` before each change to the tracked attribute. * Call `changes_applied` after the changes are persisted. * Call `clear_changes_information` when you want to reset the changes information. * Call `restore_attributes` when you want to restore previous data. A minimal implementation could be: ``` class Person include ActiveModel::Dirty define_attribute_methods :name def initialize @name = nil end def name @name end def name=(val) name_will_change! unless val == @name @name = val end def save # do persistence work changes_applied end def reload! # get the values from the persistence layer clear_changes_information end def rollback! restore_attributes end end ``` A newly instantiated `Person` object is unchanged: ``` person = Person.new person.changed? # => false ``` Change the name: ``` person.name = 'Bob' person.changed? # => true person.name_changed? # => true person.name_changed?(from: nil, to: "Bob") # => true person.name_was # => nil person.name_change # => [nil, "Bob"] person.name = 'Bill' person.name_change # => [nil, "Bill"] ``` Save the changes: ``` person.save person.changed? # => false person.name_changed? # => false ``` Reset the changes: ``` person.previous_changes # => {"name" => [nil, "Bill"]} person.name_previously_changed? # => true person.name_previously_changed?(from: nil, to: "Bill") # => true person.name_previous_change # => [nil, "Bill"] person.name_previously_was # => nil person.reload! person.previous_changes # => {} ``` Rollback the changes: ``` person.name = "Uncle Bob" person.rollback! person.name # => "Bill" person.name_changed? # => false ``` Assigning the same value leaves the attribute unchanged: ``` person.name = 'Bill' person.name_changed? # => false person.name_change # => nil ``` Which attributes have changed? 
``` person.name = 'Bob' person.changed # => ["name"] person.changes # => {"name" => ["Bill", "Bob"]} ``` If an attribute is modified in-place then make use of `[attribute_name]_will_change!` to mark that the attribute is changing. Otherwise Active Model can't track changes to in-place attributes. Note that Active Record can detect in-place modifications automatically. You do not need to call `[attribute_name]_will_change!` on Active Record models. ``` person.name_will_change! person.name_change # => ["Bill", "Bill"] person.name << 'y' person.name_change # => ["Bill", "Billy"] ``` changed() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 173 def changed mutations_from_database.changed_attribute_names end ``` Returns an array with the name of the attributes with unsaved changes. ``` person.changed # => [] person.name = 'bob' person.changed # => ["name"] ``` changed?() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 164 def changed? mutations_from_database.any_changes? end ``` Returns `true` if any of the attributes has unsaved changes, `false` otherwise. ``` person.changed? # => false person.name = 'bob' person.changed? # => true ``` changed\_attributes() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 221 def changed_attributes mutations_from_database.changed_values end ``` Returns a hash of the attributes with unsaved changes indicating their original values like `attr => original value`. ``` person.name # => "bob" person.name = 'robert' person.changed_attributes # => {"name" => "bob"} ``` changes() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 231 def changes mutations_from_database.changes end ``` Returns a hash of changed attributes indicating their original and new values like `attr => [original value, new value]`. 
``` person.changes # => {} person.name = 'bob' person.changes # => { "name" => ["bill", "bob"] } ``` changes\_applied() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 150 def changes_applied unless defined?(@attributes) mutations_from_database.finalize_changes end @mutations_before_last_save = mutations_from_database forget_attribute_assignments @mutations_from_database = nil end ``` Clears dirty data and moves `changes` to `previous_changes` and `mutations_from_database` to `mutations_before_last_save` respectively. clear\_attribute\_changes(attr\_names) Show source ``` # File activemodel/lib/active_model/dirty.rb, line 209 def clear_attribute_changes(attr_names) attr_names.each do |attr_name| clear_attribute_change(attr_name) end end ``` clear\_changes\_information() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 203 def clear_changes_information @mutations_before_last_save = nil forget_attribute_assignments @mutations_from_database = nil end ``` Clears all dirty data: current changes and previous changes. previous\_changes() Show source ``` # File activemodel/lib/active_model/dirty.rb, line 241 def previous_changes mutations_before_last_save.changes end ``` Returns a hash of attributes that were changed before the model was saved. ``` person.name # => "bob" person.name = 'robert' person.save person.previous_changes # => {"name" => ["bob", "robert"]} ``` restore\_attributes(attr\_names = changed) Show source ``` # File activemodel/lib/active_model/dirty.rb, line 198 def restore_attributes(attr_names = changed) attr_names.each { |attr_name| restore_attribute!(attr_name) } end ``` Restore all previous data of the provided attributes. rails module ActiveModel::Callbacks module ActiveModel::Callbacks ============================== Included modules: [ActiveSupport::Callbacks](../activesupport/callbacks) Active Model Callbacks ---------------------- Provides an interface for any class to have Active Record like callbacks. 
Like the Active Record methods, the callback chain is aborted as soon as one of the methods throws `:abort`. First, extend [`ActiveModel::Callbacks`](callbacks) from the class you are creating: ``` class MyModel extend ActiveModel::Callbacks end ``` Then define a list of methods that you want callbacks attached to: ``` define_model_callbacks :create, :update ``` This will provide all three standard callbacks (before, around and after) for both the `:create` and `:update` methods. To implement, you need to wrap the methods you want callbacks on in a block so that the callbacks get a chance to fire: ``` def create run_callbacks :create do # Your create action methods here end end ``` Then in your class, you can use the `before_create`, `after_create` and `around_create` methods, just as you would in an Active Record model. ``` before_create :action_before_create def action_before_create # Your code here end ``` When defining an around callback remember to yield to the block, otherwise it won't be executed: ``` around_create :log_status def log_status puts 'going to call the block...' yield puts 'block successfully called.' end ``` You can choose to have only specific callbacks by passing a hash to the `define_model_callbacks` method. ``` define_model_callbacks :create, only: [:after, :before] ``` Would only create the `after_create` and `before_create` callback methods in your class. NOTE: Calling the same callback multiple times will overwrite previous callback definitions. define\_model\_callbacks(\*callbacks) Show source ``` # File activemodel/lib/active_model/callbacks.rb, line 109 def define_model_callbacks(*callbacks) options = callbacks.extract_options! 
options = { skip_after_callbacks_if_terminated: true, scope: [:kind, :name], only: [:before, :around, :after] }.merge!(options) types = Array(options.delete(:only)) callbacks.each do |callback| define_callbacks(callback, options) types.each do |type| send("_define_#{type}_model_callback", self, callback) end end end ``` [`define_model_callbacks`](callbacks#method-i-define_model_callbacks) accepts the same options `define_callbacks` does, in case you want to overwrite a default. Besides that, it also accepts an `:only` option, where you can choose if you want all types (before, around or after) or just some. ``` define_model_callbacks :initializer, only: :after ``` Note, the `only: <type>` hash will apply to all callbacks defined on that method call. To get around this you can call the [`define_model_callbacks`](callbacks#method-i-define_model_callbacks) method as many times as you need. ``` define_model_callbacks :create, only: :after define_model_callbacks :update, only: :before define_model_callbacks :destroy, only: :around ``` Would create `after_create`, `before_update` and `around_destroy` methods only. You can pass in a class to before\_<type>, after\_<type> and around\_<type>, in which case the callback will call that class's <action>\_<type> method passing the object that the callback is being called on. ``` class MyModel extend ActiveModel::Callbacks define_model_callbacks :create before_create AnotherClass end class AnotherClass def self.before_create( obj ) # obj is the MyModel instance that the callback is being called on end end ``` NOTE: `method_name` passed to [`define_model_callbacks`](callbacks#method-i-define_model_callbacks) must not end with `!`, `?` or `=`. 
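The mechanics above — register method names per callback kind, then have `run_callbacks` fire them around the wrapped body — can be sketched in a few lines of plain Ruby. This is a toy illustration, not `ActiveSupport::Callbacks` (which also supports `around`/`after` kinds, blocks, conditions, and `:abort` termination); `MiniCallbacks` and its methods are made-up names:

```ruby
# Toy sketch of the wrap-and-dispatch callback pattern; not the
# ActiveSupport::Callbacks implementation.
module MiniCallbacks
  def self.included(base)
    base.instance_variable_set(:@callbacks, Hash.new { |h, k| h[k] = [] })
    base.extend(ClassMethods)
  end

  module ClassMethods
    # before :create, :method_name -- register a before callback.
    def before(kind, method_name)
      @callbacks[kind] << method_name
    end

    def callbacks_for(kind)
      @callbacks[kind]
    end
  end

  # Fire the registered callbacks, then run the wrapped body.
  def run_callbacks(kind)
    self.class.callbacks_for(kind).each { |name| send(name) }
    yield
  end
end

class MyModel
  include MiniCallbacks
  attr_reader :log

  before :create, :action_before_create

  def initialize
    @log = []
  end

  def action_before_create
    @log << :before_create
  end

  def create
    run_callbacks(:create) { @log << :create }
  end
end

model = MyModel.new
model.create
model.log # => [:before_create, :create]
```

The real `define_model_callbacks :create` generates the `before_create`/`around_create`/`after_create` macros on top of this same idea: callbacks only fire because the method body is wrapped in `run_callbacks`.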
rails module ActiveModel::Type module ActiveModel::Type ========================= register(type\_name, klass = nil, &block) Show source ``` # File activemodel/lib/active_model/type.rb, line 29 def register(type_name, klass = nil, &block) registry.register(type_name, klass, &block) end ``` Add a new type to the registry, allowing it to be referenced as a symbol by [attribute](attributes/classmethods#method-i-attribute). rails module ActiveModel::AttributeAssignment module ActiveModel::AttributeAssignment ======================================== assign\_attributes(new\_attributes) Show source ``` # File activemodel/lib/active_model/attribute_assignment.rb, line 28 def assign_attributes(new_attributes) unless new_attributes.respond_to?(:each_pair) raise ArgumentError, "When assigning attributes, you must pass a hash as an argument, #{new_attributes.class} passed." end return if new_attributes.empty? _assign_attributes(sanitize_for_mass_assignment(new_attributes)) end ``` Allows you to set all the attributes by passing in a hash of attributes with keys matching the attribute names. If the passed hash responds to `permitted?` method and the return value of this method is `false` an `ActiveModel::ForbiddenAttributesError` exception is raised. ``` class Cat include ActiveModel::AttributeAssignment attr_accessor :name, :status end cat = Cat.new cat.assign_attributes(name: "Gorby", status: "yawning") cat.name # => 'Gorby' cat.status # => 'yawning' cat.assign_attributes(status: "sleeping") cat.name # => 'Gorby' cat.status # => 'sleeping' ``` Also aliased as: [attributes=](attributeassignment#method-i-attributes-3D) attributes=(new\_attributes) Alias for: [assign\_attributes](attributeassignment#method-i-assign_attributes) rails class ActiveModel::RangeError class ActiveModel::RangeError ============================== Parent: RangeError Raised when attribute values are out of range. 
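The `assign_attributes` contract above — accept anything hash-like, then route each pair through its writer — can be sketched in plain Ruby. This is an illustrative re-implementation, not the Active Model source; `MiniAssignment` is a made-up name, and the `sanitize_for_mass_assignment` step (the `permitted?` / `ForbiddenAttributesError` check) is deliberately omitted:

```ruby
# Toy re-implementation of the assign_attributes contract; not the
# Active Model code. The permitted?/ForbiddenAttributesError check
# is left out for brevity.
module MiniAssignment
  def assign_attributes(new_attributes)
    # Anything hash-like (responds to each_pair) is accepted.
    unless new_attributes.respond_to?(:each_pair)
      raise ArgumentError,
            "When assigning attributes, you must pass a hash as an argument, #{new_attributes.class} passed."
    end
    return if new_attributes.empty?

    # Route every key through its writer method.
    new_attributes.each_pair { |key, value| public_send("#{key}=", value) }
  end
end

class Cat
  include MiniAssignment
  attr_accessor :name, :status
end

cat = Cat.new
cat.assign_attributes(name: "Gorby", status: "yawning")
cat.name # => "Gorby"
```

In this sketch an unknown key simply raises `NoMethodError` from the missing writer; the real implementation raises `ActiveModel::UnknownAttributeError` instead.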
rails class ActiveModel::Validator class ActiveModel::Validator ============================= Parent: [Object](../object) Active Model Validator ---------------------- A simple base class that can be used along with [`ActiveModel::Validations::ClassMethods.validates_with`](validations/classmethods#method-i-validates_with) ``` class Person include ActiveModel::Validations validates_with MyValidator end class MyValidator < ActiveModel::Validator def validate(record) if some_complex_logic record.errors.add(:base, "This record is invalid") end end private def some_complex_logic # ... end end ``` Any class that inherits from [`ActiveModel::Validator`](validator) must implement a method called `validate` which accepts a `record`. ``` class Person include ActiveModel::Validations validates_with MyValidator end class MyValidator < ActiveModel::Validator def validate(record) record # => The person instance being validated options # => Any non-standard options passed to validates_with end end ``` To cause a validation error, you must add to the `record`'s errors directly from within the validators message. ``` class MyValidator < ActiveModel::Validator def validate(record) record.errors.add :base, "This is some custom error message" record.errors.add :first_name, "This is some complex validation" # etc... end end ``` To add behavior to the initialize method, use the following signature: ``` class MyValidator < ActiveModel::Validator def initialize(options) super @my_custom_field = options[:field_name] || :first_name end end ``` Note that the validator is initialized only once for the whole application life cycle, and not on each validation run. The easiest way to add custom validators for validating individual attributes is with the convenient `ActiveModel::EachValidator`. ``` class TitleValidator < ActiveModel::EachValidator def validate_each(record, attribute, value) record.errors.add attribute, 'must be Mr., Mrs., or Dr.' unless %w(Mr. Mrs. 
Dr.).include?(value) end end ``` This can now be used in combination with the `validates` method (see `ActiveModel::Validations::ClassMethods.validates` for more on this). ``` class Person include ActiveModel::Validations attr_accessor :title validates :title, presence: true, title: true end ``` It can be useful to access the class that is using that validator when there are prerequisites such as an `attr_accessor` being present. This class is accessible via `options[:class]` in the constructor. To set up your validator, override the constructor. ``` class MyValidator < ActiveModel::Validator def initialize(options={}) super options[:class].attr_accessor :custom_attribute end end ``` options[R] kind() Show source ``` # File activemodel/lib/active_model/validator.rb, line 103 def self.kind @kind ||= name.split("::").last.underscore.chomp("_validator").to_sym unless anonymous? end ``` Returns the kind of the validator. ``` PresenceValidator.kind # => :presence AcceptanceValidator.kind # => :acceptance ``` new(options = {}) Show source ``` # File activemodel/lib/active_model/validator.rb, line 108 def initialize(options = {}) @options = options.except(:class).freeze end ``` Accepts options that will be made available through the `options` reader. kind() Show source ``` # File activemodel/lib/active_model/validator.rb, line 116 def kind self.class.kind end ``` Returns the kind for this validator. ``` PresenceValidator.new(attributes: [:username]).kind # => :presence AcceptanceValidator.new(attributes: [:terms]).kind # => :acceptance ``` validate(record) Show source ``` # File activemodel/lib/active_model/validator.rb, line 122 def validate(record) raise NotImplementedError, "Subclasses must implement a validate(record) method." end ``` Override this method in subclasses with validation logic, adding errors to the record's `errors` array where necessary.
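The `kind` class method shown above derives its symbol purely from the class name. A standalone sketch of that derivation (the `underscore` helper here is a simplified stand-in for `ActiveSupport::Inflector.underscore`, and `validator_kind` is a hypothetical name, not Rails API):

```ruby
# Standalone sketch of the kind naming convention: demodulize,
# underscore, strip the _validator suffix, symbolize.
def underscore(camel_cased)
  # Simplified stand-in for ActiveSupport::Inflector.underscore.
  camel_cased.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

def validator_kind(class_name)
  underscore(class_name.split("::").last).chomp("_validator").to_sym
end

validator_kind("PresenceValidator")                # => :presence
validator_kind("ActiveModel::AcceptanceValidator") # => :acceptance
validator_kind("MyApp::TitleValidator")            # => :title
```

This naming convention is why a custom `TitleValidator` can be triggered through `validates :title, title: true`.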
rails module ActiveModel::Translation module ActiveModel::Translation ================================ Included modules: [ActiveModel::Naming](naming) Active Model Translation ------------------------ Provides integration between your object and the Rails internationalization (i18n) framework. A minimal implementation could be: ``` class TranslatedPerson extend ActiveModel::Translation end TranslatedPerson.human_attribute_name('my_attribute') # => "My attribute" ``` This also provides the required class methods for hooking into the Rails internationalization [`API`](api), including being able to define a class-based `i18n_scope` and `lookup_ancestors` to find translations in parent classes. human\_attribute\_name(attribute, options = {}) Show source ``` # File activemodel/lib/active_model/translation.rb, line 44 def human_attribute_name(attribute, options = {}) options = { count: 1 }.merge!(options) parts = attribute.to_s.split(".") attribute = parts.pop namespace = parts.join("/") unless parts.empty? attributes_scope = "#{i18n_scope}.attributes" if namespace defaults = lookup_ancestors.map do |klass| :"#{attributes_scope}.#{klass.model_name.i18n_key}/#{namespace}.#{attribute}" end defaults << :"#{attributes_scope}.#{namespace}.#{attribute}" else defaults = lookup_ancestors.map do |klass| :"#{attributes_scope}.#{klass.model_name.i18n_key}.#{attribute}" end end defaults << :"attributes.#{attribute}" defaults << options.delete(:default) if options[:default] defaults << attribute.humanize options[:default] = defaults I18n.translate(defaults.shift, **options) end ``` Transforms attribute names into a more human format, such as “First name” instead of “first\_name”. ``` Person.human_attribute_name("first_name") # => "First name" ``` Specify `options` with additional translating options. i18n\_scope() Show source ``` # File activemodel/lib/active_model/translation.rb, line 26 def i18n_scope :activemodel end ``` Returns the `i18n_scope` for the class. 
Overwrite if you want custom lookup. lookup\_ancestors() Show source ``` # File activemodel/lib/active_model/translation.rb, line 34 def lookup_ancestors ancestors.select { |x| x.respond_to?(:model_name) } end ``` When localizing a string, it goes through the lookup returned by this method, which is used in [`ActiveModel::Name#human`](name#method-i-human), [`ActiveModel::Errors#full_messages`](errors#method-i-full_messages) and [`ActiveModel::Translation#human_attribute_name`](translation#method-i-human_attribute_name). rails module ActiveModel::Validations module ActiveModel::Validations ================================ Included modules: [ActiveModel::Validations::HelperMethods](validations/helpermethods) Active Model Validations ------------------------ Provides a full validation framework to your objects. A minimal implementation could be: ``` class Person include ActiveModel::Validations attr_accessor :first_name, :last_name validates_each :first_name, :last_name do |record, attr, value| record.errors.add attr, "starts with z." if value.start_with?("z") end end ``` Which provides you with the full standard validation stack that you know from Active Record: ``` person = Person.new person.valid? # => true person.invalid? # => false person.first_name = 'zoolander' person.valid? # => false person.invalid? # => true person.errors.messages # => {first_name:["starts with z."]} ``` Note that `ActiveModel::Validations` automatically adds an `errors` method to your instances initialized with a new `ActiveModel::Errors` object, so there is no need for you to do this manually. errors() Show source ``` # File activemodel/lib/active_model/validations.rb, line 301 def errors @errors ||= Errors.new(self) end ``` Returns the `Errors` object that holds all information about attribute error messages. ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name end person = Person.new person.valid? 
# => false person.errors # => #<ActiveModel::Errors:0x007fe603816640 @messages={name:["can't be blank"]}> ``` invalid?(context = nil) Show source ``` # File activemodel/lib/active_model/validations.rb, line 373 def invalid?(context = nil) !valid?(context) end ``` Performs the opposite of `valid?`. Returns `true` if errors were added, `false` otherwise. ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name end person = Person.new person.name = '' person.invalid? # => true person.name = 'david' person.invalid? # => false ``` Context can optionally be supplied to define which callbacks to test against (the context is defined on the validations using `:on`). ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name, on: :new end person = Person.new person.invalid? # => false person.invalid?(:new) # => true ``` valid?(context = nil) Show source ``` # File activemodel/lib/active_model/validations.rb, line 334 def valid?(context = nil) current_context, self.validation_context = validation_context, context errors.clear run_validations! ensure self.validation_context = current_context end ``` Runs all the specified validations and returns `true` if no errors were added otherwise `false`. ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name end person = Person.new person.name = '' person.valid? # => false person.name = 'david' person.valid? # => true ``` Context can optionally be supplied to define which callbacks to test against (the context is defined on the validations using `:on`). ``` class Person include ActiveModel::Validations attr_accessor :name validates_presence_of :name, on: :new end person = Person.new person.valid? 
# => true person.valid?(:new) # => false ``` Also aliased as: [validate](validations#method-i-validate) validate(context = nil) Alias for: [valid?](validations#method-i-valid-3F) validate!(context = nil) Show source ``` # File activemodel/lib/active_model/validations.rb, line 382 def validate!(context = nil) valid?(context) || raise_validation_error end ``` Runs all the validations within the specified context. Returns `true` if no errors are found, raises `ValidationError` otherwise. [`Validations`](validations) with no `:on` option will run no matter the context. [`Validations`](validations) with some `:on` option will only run in the specified context. validates\_with(\*args, &block) Show source ``` # File activemodel/lib/active_model/validations/with.rb, line 137 def validates_with(*args, &block) options = args.extract_options! options[:class] = self.class args.each do |klass| validator = klass.new(options, &block) validator.validate(self) end end ``` Passes the record off to the class or classes specified and allows them to add errors based on more complex conditions. ``` class Person include ActiveModel::Validations validate :instance_validations def instance_validations validates_with MyValidator end end ``` Please consult the class method documentation for more information on creating your own validator. You may also pass it multiple classes, like so: ``` class Person include ActiveModel::Validations validate :instance_validations, on: :create def instance_validations validates_with MyValidator, MyOtherValidator end end ``` Standard configuration options (`:on`, `:if` and `:unless`), which are available on the class version of `validates_with`, should instead be placed on the `validates` method as these are applied and tested in the callback. If you pass any additional configuration options, they will be passed to the class and available as `options`, please refer to the class version of this method for more information. 
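The instance-level `validates_with` above boils down to: merge `:class` into the options, instantiate each validator class with them, and hand it the record. A plain-Ruby sketch of that dispatch (`MiniRecord` and `NameValidator` are illustrative stand-ins, with a bare array in place of `ActiveModel::Errors`):

```ruby
# Plain-Ruby sketch of the validates_with dispatch; not the Active
# Model implementation.
class MiniRecord
  attr_reader :name, :errors

  def initialize(name)
    @name = name
    @errors = []
  end

  # Merge :class into the options, instantiate each validator class,
  # and let it add errors to this record.
  def validates_with(*validator_classes, **options)
    options[:class] = self.class
    validator_classes.each do |klass|
      klass.new(options).validate(self)
    end
  end
end

class NameValidator
  def initialize(options)
    @options = options
  end

  def validate(record)
    record.errors << "name can't be blank" if record.name.to_s.empty?
  end
end

record = MiniRecord.new("")
record.validates_with(NameValidator)
record.errors # => ["name can't be blank"]
```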
raise\_validation\_error() Show source ``` # File activemodel/lib/active_model/validations.rb, line 410 def raise_validation_error # :doc: raise(ValidationError.new(self)) end ``` rails module ActiveModel::SecurePassword module ActiveModel::SecurePassword =================================== MAX\_PASSWORD\_LENGTH\_ALLOWED The BCrypt hash function can handle a maximum of 72 bytes; if a password longer than 72 bytes is supplied, the extra bytes are silently ignored. Hence the restriction on password length. rails class ActiveModel::ValidationError class ActiveModel::ValidationError =================================== Parent: StandardError Active [`Model`](model) [`ValidationError`](validationerror) ============================================================ Raised by `validate!` when the model is invalid. Use the `model` method to retrieve the record which did not validate. ``` begin complex_operation_that_internally_calls_validate! rescue ActiveModel::ValidationError => invalid puts invalid.model.errors end ``` model[R] new(model) Show source ``` # File activemodel/lib/active_model/validations.rb, line 428 def initialize(model) @model = model errors = @model.errors.full_messages.join(", ") super(I18n.t(:"#{@model.class.i18n_scope}.errors.messages.model_invalid", errors: errors, default: :"errors.messages.model_invalid")) end ``` Calls superclass method rails class ActiveModel::Name class ActiveModel::Name ======================== Parent: [Object](../object) Included modules: cache\_key[RW] collection[RW] element[RW] i18n\_key[RW] name[RW] param\_key[RW] plural[RW] route\_key[RW] singular[RW] singular\_route\_key[RW] new(klass, namespace = nil, name = nil, locale = :en) Show source ``` # File activemodel/lib/active_model/naming.rb, line 166 def initialize(klass, namespace = nil, name = nil, locale = :en) @name = name || klass.name raise ArgumentError, "Class name cannot be blank. You need to supply a name argument when anonymous class given" if @name.blank?
@unnamespaced = @name.delete_prefix("#{namespace.name}::") if namespace @klass = klass @singular = _singularize(@name) @plural = ActiveSupport::Inflector.pluralize(@singular, locale) @uncountable = @plural == @singular @element = ActiveSupport::Inflector.underscore(ActiveSupport::Inflector.demodulize(@name)) @human = ActiveSupport::Inflector.humanize(@element) @collection = ActiveSupport::Inflector.tableize(@name) @param_key = (namespace ? _singularize(@unnamespaced) : @singular) @i18n_key = @name.underscore.to_sym @route_key = (namespace ? ActiveSupport::Inflector.pluralize(@param_key, locale) : @plural.dup) @singular_route_key = ActiveSupport::Inflector.singularize(@route_key, locale) @route_key << "_index" if @uncountable end ``` Returns a new [`ActiveModel::Name`](name) instance. By default, the `namespace` and `name` option will take the namespace and name of the given class respectively. Use `locale` argument for singularize and pluralize model name. ``` module Foo class Bar end end ActiveModel::Name.new(Foo::Bar).to_s # => "Foo::Bar" ``` !~(regexp) Show source ``` # File activemodel/lib/active_model/naming.rb, line 83 ``` Equivalent to `String#!~`. Match the class name against the given regexp. Returns `true` if there is no match, otherwise `false`. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name !~ /Post/ # => false BlogPost.model_name !~ /\d/ # => true ``` <=>(other) Show source ``` # File activemodel/lib/active_model/naming.rb, line 50 ``` Equivalent to `String#<=>`. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name <=> 'BlogPost' # => 0 BlogPost.model_name <=> 'Blog' # => 1 BlogPost.model_name <=> 'BlogPosts' # => -1 ``` ==(other) Show source ``` # File activemodel/lib/active_model/naming.rb, line 19 ``` Equivalent to `String#==`. Returns `true` if the class name and `other` are equal, otherwise `false`.
``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name == 'BlogPost' # => true BlogPost.model_name == 'Blog Post' # => false ``` ===(other) Show source ``` # File activemodel/lib/active_model/naming.rb, line 35 ``` Equivalent to `#==`. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name === 'BlogPost' # => true BlogPost.model_name === 'Blog Post' # => false ``` =~(regexp) Show source ``` # File activemodel/lib/active_model/naming.rb, line 66 ``` Equivalent to `String#=~`. Match the class name against the given regexp. Returns the position where the match starts or `nil` if there is no match. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name =~ /Post/ # => 4 BlogPost.model_name =~ /\d/ # => nil ``` eql?(other) Show source ``` # File activemodel/lib/active_model/naming.rb, line 99 ``` Equivalent to `String#eql?`. Returns `true` if the class name and `other` have the same length and content, otherwise `false`. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name.eql?('BlogPost') # => true BlogPost.model_name.eql?('Blog Post') # => false ``` human(options = {}) Show source ``` # File activemodel/lib/active_model/naming.rb, line 197 def human(options = {}) return @human unless @klass.respond_to?(:lookup_ancestors) && @klass.respond_to?(:i18n_scope) defaults = @klass.lookup_ancestors.map do |klass| klass.model_name.i18n_key end defaults << options[:default] if options[:default] defaults << @human options = { scope: [@klass.i18n_scope, :models], count: 1, default: defaults }.merge!(options.except(:default)) I18n.translate(defaults.shift, **options) end ``` Transform the model name into a more human format, using I18n. By default, it will underscore then humanize the class name. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name.human # => "Blog post" ``` Specify `options` with additional translating options. 
match?(regexp) Show source ``` # File activemodel/lib/active_model/naming.rb, line 115 ``` Equivalent to `String#match?`. Match the class name against the given regexp. Returns `true` if there is a match, otherwise `false`. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name.match?(/Post/) # => true BlogPost.model_name.match?(/\d/) # => false ``` to\_s() Show source ``` # File activemodel/lib/active_model/naming.rb, line 131 ``` Returns the class name. ``` class BlogPost extend ActiveModel::Naming end BlogPost.model_name.to_s # => "BlogPost" ``` to\_str() Show source ``` # File activemodel/lib/active_model/naming.rb, line 151 delegate :==, :===, :<=>, :=~, :"!~", :eql?, :match?, :to_s, :to_str, :as_json, to: :name ``` Equivalent to `to_s`. uncountable?() Show source ``` # File activemodel/lib/active_model/naming.rb, line 212 def uncountable? @uncountable end ``` rails class ActiveModel::Errors class ActiveModel::Errors ========================== Parent: [Object](../object) Included modules: [Enumerable](../enumerable) Active Model Errors ------------------- Provides error related functionalities you can include in your object for handling error messages and interacting with Action View helpers. A minimal implementation could be: ``` class Person # Required dependency for ActiveModel::Errors extend ActiveModel::Naming def initialize @errors = ActiveModel::Errors.new(self) end attr_accessor :name attr_reader :errors def validate! errors.add(:name, :blank, message: "cannot be nil") if name.nil? end # The following methods are needed to be minimally implemented def read_attribute_for_validation(attr) send(attr) end def self.human_attribute_name(attr, options = {}) attr end def self.lookup_ancestors [self] end end ``` The last three methods are required in your object for `Errors` to be able to generate error messages correctly and also handle multiple languages. 
Of course, if you extend your object with `ActiveModel::Translation` you will not need to implement the last two. Likewise, using `ActiveModel::Validations` will handle the validation related methods for you. The above allows you to do: ``` person = Person.new person.validate! # => ["cannot be nil"] person.errors.full_messages # => ["name cannot be nil"] # etc.. ``` errors[R] The actual array of `Error` objects This method is aliased to `objects`. objects[R] The actual array of `Error` objects This method is aliased to `objects`. new(base) Show source ``` # File activemodel/lib/active_model/errors.rb, line 92 def initialize(base) @base = base @errors = [] end ``` Pass in the instance of the object that is using the errors object. ``` class Person def initialize @errors = ActiveModel::Errors.new(self) end end ``` [](attribute) Show source ``` # File activemodel/lib/active_model/errors.rb, line 197 def [](attribute) messages_for(attribute) end ``` When passed a symbol or a name of a method, returns an array of errors for the method. ``` person.errors[:name] # => ["cannot be nil"] person.errors['name'] # => ["cannot be nil"] ``` add(attribute, type = :invalid, \*\*options) Show source ``` # File activemodel/lib/active_model/errors.rb, line 310 def add(attribute, type = :invalid, **options) attribute, type, options = normalize_arguments(attribute, type, **options) error = Error.new(@base, attribute, type, **options) if exception = options[:strict] exception = ActiveModel::StrictValidationFailed if exception == true raise exception, error.full_message end @errors.append(error) error end ``` Adds a new error of `type` on `attribute`. More than one error can be added to the same `attribute`. If no `type` is supplied, `:invalid` is assumed. 
``` person.errors.add(:name) # Adds <#ActiveModel::Error attribute=name, type=invalid> person.errors.add(:name, :not_implemented, message: "must be implemented") # Adds <#ActiveModel::Error attribute=name, type=not_implemented, options={:message=>"must be implemented"}> person.errors.messages # => {:name=>["is invalid", "must be implemented"]} ``` If `type` is a string, it will be used as error message. If `type` is a symbol, it will be translated using the appropriate scope (see `generate_message`). ``` person.errors.add(:name, :blank) person.errors.messages # => {:name=>["can't be blank"]} person.errors.add(:name, :too_long, { count: 25 }) person.errors.messages # => ["is too long (maximum is 25 characters)"] ``` If `type` is a proc, it will be called, allowing for things like `Time.now` to be used within an error. If the `:strict` option is set to `true`, it will raise [`ActiveModel::StrictValidationFailed`](strictvalidationfailed) instead of adding the error. `:strict` option can also be set to any other exception. ``` person.errors.add(:name, :invalid, strict: true) # => ActiveModel::StrictValidationFailed: Name is invalid person.errors.add(:name, :invalid, strict: NameIsInvalid) # => NameIsInvalid: Name is invalid person.errors.messages # => {} ``` `attribute` should be set to `:base` if the error is not directly associated with a single attribute. ``` person.errors.add(:base, :name_or_email_blank, message: "either name or email must be present") person.errors.messages # => {:base=>["either name or email must be present"]} person.errors.details # => {:base=>[{error: :name_or_email_blank}]} ``` added?(attribute, type = :invalid, options = {}) Show source ``` # File activemodel/lib/active_model/errors.rb, line 340 def added?(attribute, type = :invalid, options = {}) attribute, type, options = normalize_arguments(attribute, type, **options) if type.is_a? Symbol @errors.any? 
{ |error| error.strict_match?(attribute, type, **options) } else messages_for(attribute).include?(type) end end ``` Returns `true` if an error matches provided `attribute` and `type`, or `false` otherwise. `type` is treated the same as for `add`. ``` person.errors.add :name, :blank person.errors.added? :name, :blank # => true person.errors.added? :name, "can't be blank" # => true ``` If the error requires options, then it returns `true` with the correct options, or `false` with incorrect or missing options. ``` person.errors.add :name, :too_long, { count: 25 } person.errors.added? :name, :too_long, count: 25 # => true person.errors.added? :name, "is too long (maximum is 25 characters)" # => true person.errors.added? :name, :too_long, count: 24 # => false person.errors.added? :name, :too_long # => false person.errors.added? :name, "is too long" # => false ``` as\_json(options = nil) Show source ``` # File activemodel/lib/active_model/errors.rb, line 215 def as_json(options = nil) to_hash(options && options[:full_messages]) end ``` Returns a [`Hash`](../hash) that can be used as the JSON representation for this object. You can pass the `:full_messages` option. This determines if the json object should contain full messages or not (false by default). 
``` person.errors.as_json # => {:name=>["cannot be nil"]} person.errors.as_json(full_messages: true) # => {:name=>["name cannot be nil"]} ``` attribute\_names() Show source ``` # File activemodel/lib/active_model/errors.rb, line 205 def attribute_names @errors.map(&:attribute).uniq.freeze end ``` Returns all error attribute names ``` person.errors.messages # => {:name=>["cannot be nil", "must be specified"]} person.errors.attribute_names # => [:name] ``` delete(attribute, type = nil, \*\*options) Show source ``` # File activemodel/lib/active_model/errors.rb, line 183 def delete(attribute, type = nil, **options) attribute, type, options = normalize_arguments(attribute, type, **options) matches = where(attribute, type, **options) matches.each do |error| @errors.delete(error) end matches.map(&:message).presence end ``` Delete messages for `key`. Returns the deleted messages. ``` person.errors[:name] # => ["cannot be nil"] person.errors.delete(:name) # => ["cannot be nil"] person.errors[:name] # => [] ``` details() Show source ``` # File activemodel/lib/active_model/errors.rb, line 244 def details hash = group_by_attribute.transform_values do |errors| errors.map(&:details) end hash.default = EMPTY_ARRAY hash.freeze hash end ``` Returns a [`Hash`](../hash) of attributes with an array of their error details. full\_message(attribute, message) Show source ``` # File activemodel/lib/active_model/errors.rb, line 419 def full_message(attribute, message) Error.full_message(attribute, message, @base) end ``` Returns a full message for a given attribute. ``` person.errors.full_message(:name, 'is invalid') # => "Name is invalid" ``` full\_messages() Show source ``` # File activemodel/lib/active_model/errors.rb, line 383 def full_messages @errors.map(&:full_message) end ``` Returns all the full error messages in an array. 
``` class Person validates_presence_of :name, :address, :email validates_length_of :name, in: 5..30 end person = Person.create(address: '123 First St.') person.errors.full_messages # => ["Name is too short (minimum is 5 characters)", "Name can't be blank", "Email can't be blank"] ``` Also aliased as: [to\_a](errors#method-i-to_a) full\_messages\_for(attribute) Show source ``` # File activemodel/lib/active_model/errors.rb, line 398 def full_messages_for(attribute) where(attribute).map(&:full_message).freeze end ``` Returns all the full error messages for a given attribute in an array. ``` class Person validates_presence_of :name, :email validates_length_of :name, in: 5..30 end person = Person.create() person.errors.full_messages_for(:name) # => ["Name is too short (minimum is 5 characters)", "Name can't be blank"] ``` generate\_message(attribute, type = :invalid, options = {}) Show source ``` # File activemodel/lib/active_model/errors.rb, line 447 def generate_message(attribute, type = :invalid, options = {}) Error.generate_message(attribute, type, @base, options) end ``` Translates an error message in its default scope (`activemodel.errors.messages`). [`Error`](error) messages are first looked up in `activemodel.errors.models.MODEL.attributes.ATTRIBUTE.MESSAGE`, if it's not there, it's looked up in `activemodel.errors.models.MODEL.MESSAGE` and if that is not there also, it returns the translation of the default message (e.g. `activemodel.errors.messages.MESSAGE`). The translated model name, translated attribute name and the value are available for interpolation. When using inheritance in your models, it will check all the inherited models too, but only if the model itself hasn't been found. 
Say you have `class Admin < User; end` and you want the translation for the `:blank` error message for the `title` attribute. It looks for these translations: * `activemodel.errors.models.admin.attributes.title.blank` * `activemodel.errors.models.admin.blank` * `activemodel.errors.models.user.attributes.title.blank` * `activemodel.errors.models.user.blank` * any default you provided through the `options` hash (in the `activemodel.errors` scope) * `activemodel.errors.messages.blank` * `errors.attributes.title.blank` * `errors.messages.blank` group\_by\_attribute() Show source ``` # File activemodel/lib/active_model/errors.rb, line 257 def group_by_attribute @errors.group_by(&:attribute) end ``` Returns a [`Hash`](../hash) of attributes with an array of their [`Error`](error) objects. ``` person.errors.group_by_attribute # => {:name=>[<#ActiveModel::Error>, <#ActiveModel::Error>]} ``` has\_key?(attribute) Alias for: [include?](errors#method-i-include-3F) import(error, override\_options = {}) Show source ``` # File activemodel/lib/active_model/errors.rb, line 125 def import(error, override_options = {}) [:attribute, :type].each do |key| if override_options.key?(key) override_options[key] = override_options[key].to_sym end end @errors.append(NestedError.new(@base, error, override_options)) end ``` Imports one error. Imported errors are wrapped as a `NestedError`, providing access to the original error object. If the attribute or type needs to be overridden, use `override_options`, a [`Hash`](../hash) accepting: * `:attribute` - override the attribute the error belongs to * `:type` - override the type of the error include?(attribute) Show source ``` # File activemodel/lib/active_model/errors.rb, line 170 def include?(attribute) @errors.any? { |error| error.match?(attribute.to_sym) } end ``` Returns `true` if the error messages include an error for the given key `attribute`, `false` otherwise. 
``` person.errors.messages # => {:name=>["cannot be nil"]} person.errors.include?(:name) # => true person.errors.include?(:age) # => false ``` Also aliased as: [has\_key?](errors#method-i-has_key-3F), [key?](errors#method-i-key-3F) key?(attribute) Alias for: [include?](errors#method-i-include-3F) merge!(other) Show source ``` # File activemodel/lib/active_model/errors.rb, line 142 def merge!(other) return errors if equal?(other) other.errors.each { |error| import(error) } end ``` Merges the errors from `other`, each `Error` wrapped as `NestedError`. other - The [`ActiveModel::Errors`](errors) instance. Examples ``` person.errors.merge!(other) ``` messages() Show source ``` # File activemodel/lib/active_model/errors.rb, line 236 def messages hash = to_hash hash.default = EMPTY_ARRAY hash.freeze hash end ``` Returns a [`Hash`](../hash) of attributes with an array of their error messages. messages\_for(attribute) Show source ``` # File activemodel/lib/active_model/errors.rb, line 412 def messages_for(attribute) where(attribute).map(&:message) end ``` Returns all the error messages for a given attribute in an array. ``` class Person validates_presence_of :name, :email validates_length_of :name, in: 5..30 end person = Person.create() person.errors.messages_for(:name) # => ["is too short (minimum is 5 characters)", "can't be blank"] ``` of\_kind?(attribute, type = :invalid) Show source ``` # File activemodel/lib/active_model/errors.rb, line 363 def of_kind?(attribute, type = :invalid) attribute, type = normalize_arguments(attribute, type) if type.is_a? Symbol !where(attribute, type).empty? else messages_for(attribute).include?(type) end end ``` Returns `true` if an error on the attribute with the given type is present, or `false` otherwise. `type` is treated the same as for `add`. ``` person.errors.add :age person.errors.add :name, :too_long, { count: 25 } person.errors.of_kind? :age # => true person.errors.of_kind? :name # => false person.errors.of_kind? 
:name, :too_long # => true person.errors.of_kind? :name, "is too long (maximum is 25 characters)" # => true person.errors.of_kind? :name, :not_too_long # => false person.errors.of_kind? :name, "is too long" # => false ``` to\_a() Alias for: [full\_messages](errors#method-i-full_messages) to\_hash(full\_messages = false) Show source ``` # File activemodel/lib/active_model/errors.rb, line 224 def to_hash(full_messages = false) message_method = full_messages ? :full_message : :message group_by_attribute.transform_values do |errors| errors.map(&message_method) end end ``` Returns a [`Hash`](../hash) of attributes with their error messages. If `full_messages` is `true`, it will contain full messages (see `full_message`). ``` person.errors.to_hash # => {:name=>["cannot be nil"]} person.errors.to_hash(true) # => {:name=>["name cannot be nil"]} ``` where(attribute, type = nil, \*\*options) Show source ``` # File activemodel/lib/active_model/errors.rb, line 157 def where(attribute, type = nil, **options) attribute, type, options = normalize_arguments(attribute, type, **options) @errors.select { |error| error.match?(attribute, type, **options) } end ``` Search for errors matching `attribute`, `type` or `options`. Only supplied params will be matched. ``` person.errors.where(:name) # => all name errors. person.errors.where(:name, :too_short) # => all name errors being too short person.errors.where(:name, :too_short, minimum: 2) # => all name errors being too short and minimum is 2 ```
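The `where`/`match?` filtering above is easy to picture in isolation. Below is a minimal, dependency-free sketch — `MiniError` and `MiniErrors` are invented stand-ins, not ActiveModel's classes — of how matching on attribute, type, and options can work:

```ruby
# Hypothetical stand-ins for ActiveModel::Error / ActiveModel::Errors,
# showing only the matching logic described above.
MiniError = Struct.new(:attribute, :type, :options) do
  # An error matches when every supplied criterion equals its own value;
  # omitted criteria (nil type, absent option keys) are simply ignored.
  def match?(attr, type = nil, **opts)
    return false if attribute != attr
    return false if type && self.type != type
    opts.all? { |key, value| options[key] == value }
  end
end

class MiniErrors
  def initialize
    @errors = []
  end

  def add(attribute, type, **options)
    @errors << MiniError.new(attribute, type, options)
  end

  # Like Errors#where: keep only errors matching the supplied params.
  def where(attribute, type = nil, **options)
    @errors.select { |error| error.match?(attribute, type, **options) }
  end

  # Like Errors#to_hash, but mapping to the raw types for brevity.
  def to_hash
    @errors.group_by(&:attribute).transform_values { |errs| errs.map(&:type) }
  end
end

errors = MiniErrors.new
errors.add(:name, :too_short, minimum: 2)
errors.add(:name, :blank)

errors.where(:name).size                         # => 2
errors.where(:name, :too_short).size             # => 1
errors.where(:name, :too_short, minimum: 3).size # => 0
errors.to_hash                                   # => {:name=>[:too_short, :blank]}
```

The real `Errors#where` additionally normalizes its arguments, but the filtering shape — only supplied params are matched — is the same.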
rails class ActiveModel::ForbiddenAttributesError class ActiveModel::ForbiddenAttributesError ============================================ Parent: StandardError Raised when forbidden attributes are used for mass assignment. ``` class Person < ActiveRecord::Base end params = ActionController::Parameters.new(name: 'Bob') Person.new(params) # => ActiveModel::ForbiddenAttributesError params.permit! Person.new(params) # => #<Person id: nil, name: "Bob"> ``` rails module ActiveModel::API module ActiveModel::API ======================== Included modules: [ActiveModel::AttributeAssignment](attributeassignment), [ActiveModel::Validations](validations), [ActiveModel::Conversion](conversion) Active Model API ---------------- Includes the required interface for an object to interact with Action Pack and Action View, using different Active [`Model`](model) modules. It includes model name introspections, conversions, translations and validations. Besides that, it allows you to initialize the object with a hash of attributes, pretty much like Active Record does. A minimal implementation could be: ``` class Person include ActiveModel::API attr_accessor :name, :age end person = Person.new(name: 'bob', age: '18') person.name # => "bob" person.age # => "18" ``` Note that, by default, `ActiveModel::API` implements `persisted?` to return `false`, which is the most common case. You may want to override it in your class to simulate a different scenario: ``` class Person include ActiveModel::API attr_accessor :id, :name def persisted? self.id.present? end end person = Person.new(id: 1, name: 'bob') person.persisted? # => true ``` Also, if for some reason you need to run code on `initialize`, make sure you call `super` if you want the attributes hash initialization to happen. 
``` class Person include ActiveModel::API attr_accessor :id, :name, :omg def initialize(attributes={}) super @omg ||= true end end person = Person.new(id: 1, name: 'bob') person.omg # => true ``` For more detailed information on other functionalities available, please refer to the specific modules included in `ActiveModel::API` (see below). new(attributes = {}) Show source ``` # File activemodel/lib/active_model/api.rb, line 80 def initialize(attributes = {}) assign_attributes(attributes) if attributes super() end ``` Initializes a new model with the given `params`. ``` class Person include ActiveModel::API attr_accessor :name, :age end person = Person.new(name: 'bob', age: '18') person.name # => "bob" person.age # => "18" ``` Calls superclass method persisted?() Show source ``` # File activemodel/lib/active_model/api.rb, line 95 def persisted? false end ``` Indicates if the model is persisted. Default is `false`. ``` class Person include ActiveModel::API attr_accessor :id, :name end person = Person.new(id: 1, name: 'bob') person.persisted? 
# => false ``` rails class ActiveModel::Error class ActiveModel::Error ========================= Parent: [Object](../object) Active Model Error ------------------ Represents one single error. CALLBACKS\_OPTIONS MESSAGE\_OPTIONS attribute[R] The attribute of `base` which the error belongs to base[R] The object which the error belongs to options[R] The options provided when calling `errors#add` raw\_type[R] The raw value provided as the second parameter when calling `errors#add` type[R] The type of error, defaults to `:invalid` unless specified new(base, attribute, type = :invalid, \*\*options) Show source ``` # File activemodel/lib/active_model/error.rb, line 103 def initialize(base, attribute, type = :invalid, **options) @base = base @attribute = attribute @raw_type = type @type = type || :invalid @options = options end ``` detail() Alias for: [details](error#method-i-details) details() Show source ``` # File activemodel/lib/active_model/error.rb, line 148 def details { error: raw_type }.merge(options.except(*CALLBACKS_OPTIONS + MESSAGE_OPTIONS)) end ``` Returns the error details. ``` error = ActiveModel::Error.new(person, :name, :too_short, count: 5) error.details # => { error: :too_short, count: 5 } ``` Also aliased as: [detail](error#method-i-detail) full\_message() Show source ``` # File activemodel/lib/active_model/error.rb, line 158 def full_message self.class.full_message(attribute, message, @base) end ``` Returns the full error message. ``` error = ActiveModel::Error.new(person, :name, :too_short, count: 5) error.full_message # => "Name is too short (minimum is 5 characters)" ``` match?(attribute, type = nil, \*\*options) Show source ``` # File activemodel/lib/active_model/error.rb, line 165 def match?(attribute, type = nil, **options) if @attribute != attribute || (type && @type != type) return false end options.each do |key, value| if @options[key] != value return false end end true end ``` See if error matches provided `attribute`, `type` and `options`. 
Omitted params are not checked for a match. message() Show source ``` # File activemodel/lib/active_model/error.rb, line 134 def message case raw_type when Symbol self.class.generate_message(attribute, raw_type, @base, options.except(*CALLBACKS_OPTIONS)) else raw_type end end ``` Returns the error message. ``` error = ActiveModel::Error.new(person, :name, :too_short, count: 5) error.message # => "is too short (minimum is 5 characters)" ``` strict\_match?(attribute, type, \*\*options) Show source ``` # File activemodel/lib/active_model/error.rb, line 183 def strict_match?(attribute, type, **options) return false unless match?(attribute, type) options == @options.except(*CALLBACKS_OPTIONS + MESSAGE_OPTIONS) end ``` See if error matches provided `attribute`, `type` and `options` exactly. All params must be equal to Error's own attributes to be considered a strict match. attributes\_for\_hash() Show source ``` # File activemodel/lib/active_model/error.rb, line 203 def attributes_for_hash [@base, @attribute, @raw_type, @options.except(*CALLBACKS_OPTIONS)] end ``` rails module ActiveModel::Naming module ActiveModel::Naming =========================== Active Model Naming ------------------- Creates a `model_name` method on your object. To implement, just extend [`ActiveModel::Naming`](naming) in your object: ``` class BookCover extend ActiveModel::Naming end BookCover.model_name.name # => "BookCover" BookCover.model_name.human # => "Book cover" BookCover.model_name.i18n_key # => :book_cover BookModule::BookCover.model_name.i18n_key # => :"book_module/book_cover" ``` Providing the functionality that [`ActiveModel::Naming`](naming) provides in your object is required to pass the Active Model `Lint` test. So either extending the provided method below, or rolling your own is required. 
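The kind of introspection `model_name` provides can be approximated without any Rails dependency. The sketch below (a hypothetical `MiniName`, not `ActiveModel::Name`) underscores the class name and naively appends `s`; the real implementation uses ActiveSupport's inflector, which also handles irregular and uncountable nouns:

```ruby
# Illustrative only: ActiveModel::Name uses ActiveSupport inflections;
# this naive version just splits on case boundaries and appends "s".
MiniName = Struct.new(:name) do
  def singular
    # "BookCover" -> "book_cover"
    name.gsub(/([a-z])([A-Z])/, '\1_\2').downcase
  end

  def plural
    singular + "s" # naive; "Person" would wrongly become "persons"
  end

  def human
    singular.tr("_", " ").capitalize
  end

  def i18n_key
    singular.to_sym
  end
end

n = MiniName.new("BookCover")
n.singular # => "book_cover"
n.plural   # => "book_covers"
n.human    # => "Book cover"
n.i18n_key # => :book_cover
```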
param\_key(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 327 def self.param_key(record_or_class) model_name_from_record_or_class(record_or_class).param_key end ``` Returns string to use for params names. It differs for namespaced models regarding whether it's inside isolated engine. ``` # For isolated engine: ActiveModel::Naming.param_key(Blog::Post) # => "post" # For shared engine: ActiveModel::Naming.param_key(Blog::Post) # => "blog_post" ``` plural(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 272 def self.plural(record_or_class) model_name_from_record_or_class(record_or_class).plural end ``` Returns the plural class name of a record or class. ``` ActiveModel::Naming.plural(post) # => "posts" ActiveModel::Naming.plural(Highrise::Person) # => "highrise_people" ``` route\_key(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 315 def self.route_key(record_or_class) model_name_from_record_or_class(record_or_class).route_key end ``` Returns string to use while generating route names. It differs for namespaced models regarding whether it's inside isolated engine. ``` # For isolated engine: ActiveModel::Naming.route_key(Blog::Post) # => "posts" # For shared engine: ActiveModel::Naming.route_key(Blog::Post) # => "blog_posts" ``` The route key also considers if the noun is uncountable and, in such cases, automatically appends \_index. singular(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 280 def self.singular(record_or_class) model_name_from_record_or_class(record_or_class).singular end ``` Returns the singular class name of a record or class. 
``` ActiveModel::Naming.singular(post) # => "post" ActiveModel::Naming.singular(Highrise::Person) # => "highrise_person" ``` singular\_route\_key(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 300 def self.singular_route_key(record_or_class) model_name_from_record_or_class(record_or_class).singular_route_key end ``` Returns string to use while generating route names. It differs for namespaced models regarding whether it's inside isolated engine. ``` # For isolated engine: ActiveModel::Naming.singular_route_key(Blog::Post) # => "post" # For shared engine: ActiveModel::Naming.singular_route_key(Blog::Post) # => "blog_post" ``` uncountable?(record\_or\_class) Show source ``` # File activemodel/lib/active_model/naming.rb, line 288 def self.uncountable?(record_or_class) model_name_from_record_or_class(record_or_class).uncountable? end ``` Identifies whether the class name of a record or class is uncountable. ``` ActiveModel::Naming.uncountable?(Sheep) # => true ActiveModel::Naming.uncountable?(Post) # => false ``` model\_name() Show source ``` # File activemodel/lib/active_model/naming.rb, line 259 def model_name @_model_name ||= begin namespace = module_parents.detect do |n| n.respond_to?(:use_relative_model_naming?) && n.use_relative_model_naming? end ActiveModel::Name.new(self, namespace) end end ``` Returns an [`ActiveModel::Name`](name) object for module. It can be used to retrieve all kinds of naming-related information (See [`ActiveModel::Name`](name) for more information). 
``` class Person extend ActiveModel::Naming end Person.model_name.name # => "Person" Person.model_name.class # => ActiveModel::Name Person.model_name.singular # => "person" Person.model_name.plural # => "people" ``` rails class ActiveModel::EachValidator class ActiveModel::EachValidator ================================= Parent: [ActiveModel::Validator](validator) `EachValidator` is a validator which iterates through the attributes given in the options hash invoking the `validate_each` method passing in the record, attribute and value. All Active Model validations are built on top of this validator. attributes[R] new(options) Show source ``` # File activemodel/lib/active_model/validator.rb, line 138 def initialize(options) @attributes = Array(options.delete(:attributes)) raise ArgumentError, ":attributes cannot be blank" if @attributes.empty? super check_validity! end ``` Returns a new validator instance. All options will be available via the `options` reader, however the `:attributes` option will be removed and instead be made available through the `attributes` reader. Calls superclass method [`ActiveModel::Validator::new`](validator#method-c-new) check\_validity!() Show source ``` # File activemodel/lib/active_model/validator.rb, line 166 def check_validity! end ``` Hook method that gets called by the initializer allowing verification that the arguments supplied are valid. You could for example raise an `ArgumentError` when invalid options are supplied. validate(record) Show source ``` # File activemodel/lib/active_model/validator.rb, line 148 def validate(record) attributes.each do |attribute| value = record.read_attribute_for_validation(attribute) next if (value.nil? && options[:allow_nil]) || (value.blank? && options[:allow_blank]) value = prepare_value_for_validation(value, record, attribute) validate_each(record, attribute, value) end end ``` Performs validation on the supplied record. 
By default this will call `validate_each` to determine validity; therefore, subclasses should override `validate_each` with validation logic. validate\_each(record, attribute, value) Show source ``` # File activemodel/lib/active_model/validator.rb, line 159 def validate_each(record, attribute, value) raise NotImplementedError, "Subclasses must implement a validate_each(record, attribute, value) method" end ``` Override this method in subclasses with the validation logic, adding errors to the record's `errors` array where necessary. rails class ActiveModel::UnknownAttributeError class ActiveModel::UnknownAttributeError ========================================== Parent: NoMethodError Raised when unknown attributes are supplied via mass assignment. ``` class Person include ActiveModel::AttributeAssignment include ActiveModel::Validations end person = Person.new person.assign_attributes(name: 'Gorby') # => ActiveModel::UnknownAttributeError: unknown attribute 'name' for Person. ``` attribute[R] record[R] new(record, attribute) Show source ``` # File activemodel/lib/active_model/errors.rb, line 503 def initialize(record, attribute) @record = record @attribute = attribute super("unknown attribute '#{attribute}' for #{@record.class}.") end ``` Calls superclass method rails module ActiveModel::Model module ActiveModel::Model ========================== Included modules: [ActiveModel::API](api) Active Model Basic Model ------------------------ Allows implementing models similar to `ActiveRecord::Base`. Includes `ActiveModel::API` for the required interface for an object to interact with Action Pack and Action View, but can be extended with other functionalities. 
A minimal implementation could be: ``` class Person include ActiveModel::Model attr_accessor :name, :age end person = Person.new(name: 'bob', age: '18') person.name # => "bob" person.age # => "18" ``` If for some reason you need to run code on `initialize`, make sure you call `super` if you want the attributes hash initialization to happen. ``` class Person include ActiveModel::Model attr_accessor :id, :name, :omg def initialize(attributes={}) super @omg ||= true end end person = Person.new(id: 1, name: 'bob') person.omg # => true ``` For more detailed information on other functionalities available, please refer to the specific modules included in `ActiveModel::Model` (see below). rails module ActiveModel::AttributeMethods module ActiveModel::AttributeMethods ===================================== Active Model Attribute Methods ------------------------------ Provides a way to add prefixes and suffixes to your methods as well as handling the creation of `ActiveRecord::Base`-like class methods such as `table_name`. The requirements to implement `ActiveModel::AttributeMethods` are to: * `include ActiveModel::AttributeMethods` in your class. * Call each of its methods you want to add, such as `attribute_method_suffix` or `attribute_method_prefix`. * Call `define_attribute_methods` after the other methods are called. * Define the various generic `_attribute` methods that you have declared. * Define an `attributes` method which returns a hash with each attribute name in your model as hash key and the attribute value as hash value. [`Hash`](../hash) keys must be strings. A minimal implementation could be: ``` class Person include ActiveModel::AttributeMethods attribute_method_affix prefix: 'reset_', suffix: '_to_default!' attribute_method_suffix '_contrived?' 
attribute_method_prefix 'clear_' define_attribute_methods :name attr_accessor :name def attributes { 'name' => @name } end private def attribute_contrived?(attr) true end def clear_attribute(attr) send("#{attr}=", nil) end def reset_attribute_to_default!(attr) send("#{attr}=", 'Default Name') end end ``` CALL\_COMPILABLE\_REGEXP FORWARD\_PARAMETERS NAME\_COMPILABLE\_REGEXP attribute\_missing(match, \*args, &block) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 467 def attribute_missing(match, *args, &block) __send__(match.target, match.attr_name, *args, &block) end ``` `attribute_missing` is like `method_missing`, but for attributes. When `method_missing` is called we check to see if there is a matching attribute method. If so, we tell `attribute_missing` to dispatch the attribute. This method can be overloaded to customize the behavior. method\_missing(method, \*args, &block) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 453 def method_missing(method, *args, &block) if respond_to_without_attributes?(method, true) super else match = matched_attribute_method(method.to_s) match ? attribute_missing(match, *args, &block) : super end end ``` Allows access to the object attributes, which are held in the hash returned by `attributes`, as though they were first-class methods. So a `Person` class with a `name` attribute can for example use `Person#name` and `Person#name=` and never directly use the attributes hash – except for multiple assignments with `ActiveRecord::Base#attributes=`. It's also possible to instantiate related objects, so a `Client` class belonging to the `clients` table with a `master_id` foreign key can instantiate master through `Client#master`. 
Calls superclass method respond\_to?(method, include\_private\_methods = false) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 475 def respond_to?(method, include_private_methods = false) if super true elsif !include_private_methods && super(method, true) # If we're here then we haven't found among non-private methods # but found among all methods. Which means that the given method is private. false else !matched_attribute_method(method.to_s).nil? end end ``` Calls superclass method Also aliased as: [respond\_to\_without\_attributes?](attributemethods#method-i-respond_to_without_attributes-3F) respond\_to\_without\_attributes?(method, include\_private\_methods = false) A `Person` instance with a `name` attribute can ask `person.respond_to?(:name)`, `person.respond_to?(:name=)`, and `person.respond_to?(:name?)` which will all return `true`. Alias for: [respond\_to?](attributemethods#method-i-respond_to-3F) rails module ActiveModel::Lint::Tests module ActiveModel::Lint::Tests ================================ Active Model Lint Tests ----------------------- You can test whether an object is compliant with the Active Model [`API`](../api) by including `ActiveModel::Lint::Tests` in your TestCase. It will include tests that tell you whether your object is fully compliant, or if not, which aspects of the [`API`](../api) are not implemented. Note an object is not required to implement all APIs in order to work with Action Pack. This module only intends to provide guidance in case you want all features out of the box. These tests do not attempt to determine the semantic correctness of the returned values. For instance, you could implement `valid?` to always return `true`, and the tests would pass. It is up to you to ensure that the values are semantically meaningful. Objects you pass in are expected to return a compliant object from a call to `to_model`. It is perfectly fine for `to_model` to return `self`. 
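Since the lint tests are essentially duck-typing probes, the idea can be sketched without ActiveModel at all. `lint_report` and `PlainModel` below are hypothetical names invented for illustration:

```ruby
# The interface Action Pack relies on, as probed by the lint tests below.
REQUIRED_METHODS = %i[to_model to_key to_param to_partial_path persisted? errors].freeze

# Returns the subset of the interface the object does NOT implement.
def lint_report(object)
  REQUIRED_METHODS.reject { |m| object.respond_to?(m) }
end

class PlainModel
  def to_model;        self;  end
  def persisted?;      false; end
  def to_key;          nil;   end # nil because persisted? is false
  def to_param;        nil;   end
  def to_partial_path; "plain_models/plain_model"; end
  def errors;          Hash.new { |h, k| h[k] = [] }; end
end

lint_report(PlainModel.new) # => [] (fully compliant)
lint_report(Object.new)     # => [:to_model, :to_key, :to_param, ...]
```

The real tests go further — they also check return types and the `persisted?`-dependent behavior of `to_key` and `to_param` — but a missing method fails in the same way.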
test\_errors\_aref() Show source ``` # File activemodel/lib/active_model/lint.rb, line 102 def test_errors_aref assert_respond_to model, :errors assert_equal [], model.errors[:hello], "errors#[] should return an empty Array" end ``` Passes if the object's model responds to `errors` and if calling `[](attribute)` on the result of this method returns an array. Fails otherwise. `errors[attribute]` is used to retrieve the errors of a model for a given attribute. If errors are present, the method should return an array of strings that are the errors for the attribute in question. If localization is used, the strings should be localized for the current locale. If no error is present, the method should return an empty array. test\_model\_naming() Show source ``` # File activemodel/lib/active_model/lint.rb, line 81 def test_model_naming assert_respond_to model.class, :model_name model_name = model.class.model_name assert_respond_to model_name, :to_str assert_respond_to model_name.human, :to_str assert_respond_to model_name.singular, :to_str assert_respond_to model_name.plural, :to_str assert_respond_to model, :model_name assert_equal model.model_name, model.class.model_name end ``` Passes if the object's model responds to `model_name` both as an instance method and as a class method, and if calling this method returns a string with some convenience methods: `:human`, `:singular` and `:plural`. Check [`ActiveModel::Naming`](../naming) for more information. test\_persisted?() Show source ``` # File activemodel/lib/active_model/lint.rb, line 70 def test_persisted? assert_respond_to model, :persisted? assert_boolean model.persisted?, "persisted?" end ``` Passes if the object's model responds to `persisted?` and if calling this method returns either `true` or `false`. Fails otherwise. `persisted?` is used when calculating the URL for an object. If the object is not persisted, a form for that object, for instance, will route to the create action. 
If it is persisted, a form for the object will route to the update action. test\_to\_key() Show source ``` # File activemodel/lib/active_model/lint.rb, line 31 def test_to_key assert_respond_to model, :to_key def model.persisted?() false end assert model.to_key.nil?, "to_key should return nil when `persisted?` returns false" end ``` Passes if the object's model responds to `to_key` and if calling this method returns `nil` when the object is not persisted. Fails otherwise. `to_key` returns an [`Enumerable`](../../enumerable) of all (primary) key attributes of the model, and is used to generate a unique DOM id for the object. test\_to\_param() Show source ``` # File activemodel/lib/active_model/lint.rb, line 46 def test_to_param assert_respond_to model, :to_param def model.to_key() [1] end def model.persisted?() false end assert model.to_param.nil?, "to_param should return nil when `persisted?` returns false" end ``` Passes if the object's model responds to `to_param` and if calling this method returns `nil` when the object is not persisted. Fails otherwise. `to_param` is used to represent the object's key in URLs. Implementers can decide to either raise an exception or provide a default in case the record uses a composite primary key. There are no tests for this behavior in lint because it doesn't make sense to force any of the possible implementation strategies on the implementer. test\_to\_partial\_path() Show source ``` # File activemodel/lib/active_model/lint.rb, line 58 def test_to_partial_path assert_respond_to model, :to_partial_path assert_kind_of String, model.to_partial_path end ``` Passes if the object's model responds to `to_partial_path` and if calling this method returns a string. Fails otherwise. `to_partial_path` is used for looking up partials. For example, a BlogPost model might return “blog\_posts/blog\_post”.
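The `to_key`/`to_param` contract that `test_to_key` and `test_to_param` verify — return `nil` until the object is persisted — can be sketched in plain Ruby (`SketchRecord` is an invented name, not part of ActiveModel):

```ruby
class SketchRecord
  attr_accessor :id

  def persisted?
    !id.nil?
  end

  # Key attributes as an Enumerable, or nil when unpersisted --
  # exactly the shape the lint test asserts.
  def to_key
    persisted? ? [id] : nil
  end

  # URL representation of the key; again nil when unpersisted.
  def to_param
    to_key&.join("-")
  end
end

record = SketchRecord.new
record.to_key   # => nil
record.to_param # => nil
record.id = 42
record.to_key   # => [42]
record.to_param # => "42"
```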
rails module ActiveModel::SecurePassword::ClassMethods module ActiveModel::SecurePassword::ClassMethods ================================================= Included modules: [ActiveModel::Validations](../validations) has\_secure\_password(attribute = :password, validations: true) Show source ``` # File activemodel/lib/active_model/secure_password.rb, line 61 def has_secure_password(attribute = :password, validations: true) # Load bcrypt gem only when has_secure_password is used. # This is to avoid ActiveModel (and by extension the entire framework) # being dependent on a binary library. begin require "bcrypt" rescue LoadError $stderr.puts "You don't have bcrypt installed in your application. Please add it to your Gemfile and run bundle install" raise end include InstanceMethodsOnActivation.new(attribute) if validations include ActiveModel::Validations # This ensures the model has a password by checking whether the password_digest # is present, so that this works with both new and existing records. However, # when there is an error, the message is added to the password attribute instead # so that the error message will make sense to the end-user. validate do |record| record.errors.add(attribute, :blank) unless record.public_send("#{attribute}_digest").present? end validates_length_of attribute, maximum: ActiveModel::SecurePassword::MAX_PASSWORD_LENGTH_ALLOWED validates_confirmation_of attribute, allow_blank: true end end ``` Adds methods to set and authenticate against a BCrypt password. This mechanism requires you to have a `XXX_digest` attribute. Where `XXX` is the attribute name of your desired password. The following validations are added automatically: * Password must be present on creation * Password length should be less than or equal to 72 bytes * Confirmation of password (using a `XXX_confirmation` attribute) If confirmation validation is not needed, simply leave out the value for `XXX_confirmation` (i.e. don't provide a form field for it). 
When this attribute has a `nil` value, the validation will not be triggered. For further customizability, it is possible to suppress the default validations by passing `validations: false` as an argument. Add bcrypt (~> 3.1.7) to Gemfile to use [`has_secure_password`](classmethods#method-i-has_secure_password): ``` gem 'bcrypt', '~> 3.1.7' ``` Example using Active Record (which automatically includes [`ActiveModel::SecurePassword`](../securepassword)): ``` # Schema: User(name:string, password_digest:string, recovery_password_digest:string) class User < ActiveRecord::Base has_secure_password has_secure_password :recovery_password, validations: false end user = User.new(name: 'david', password: '', password_confirmation: 'nomatch') user.save # => false, password required user.password = 'mUc3m00RsqyRe' user.save # => false, confirmation doesn't match user.password_confirmation = 'mUc3m00RsqyRe' user.save # => true user.recovery_password = "42password" user.recovery_password_digest # => "$2a$04$iOfhwahFymCs5weB3BNH/uXkTG65HR.qpW.bNhEjFP3ftli3o5DQC" user.save # => true user.authenticate('notright') # => false user.authenticate('mUc3m00RsqyRe') # => user user.authenticate_recovery_password('42password') # => user User.find_by(name: 'david')&.authenticate('notright') # => false User.find_by(name: 'david')&.authenticate('mUc3m00RsqyRe') # => user ``` rails module ActiveModel::AttributeMethods::ClassMethods module ActiveModel::AttributeMethods::ClassMethods =================================================== alias\_attribute(new\_name, old\_name) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 209 def alias_attribute(new_name, old_name) self.attribute_aliases = attribute_aliases.merge(new_name.to_s => old_name.to_s) ActiveSupport::CodeGenerator.batch(self, __FILE__, __LINE__) do |code_generator| attribute_method_matchers.each do |matcher| method_name = matcher.method_name(new_name).to_s target_name = matcher.method_name(old_name).to_s 
parameters = matcher.parameters mangled_name = target_name unless NAME_COMPILABLE_REGEXP.match?(target_name) mangled_name = "__temp__#{target_name.unpack1("h*")}" end code_generator.define_cached_method(method_name, as: mangled_name, namespace: :alias_attribute) do |batch| body = if CALL_COMPILABLE_REGEXP.match?(target_name) "self.#{target_name}(#{parameters || ''})" else call_args = [":'#{target_name}'"] call_args << parameters if parameters "send(#{call_args.join(", ")})" end modifier = matcher.parameters == FORWARD_PARAMETERS ? "ruby2_keywords " : "" batch << "#{modifier}def #{mangled_name}(#{parameters || ''})" << body << "end" end end end end ``` Allows you to make aliases for attributes. ``` class Person include ActiveModel::AttributeMethods attr_accessor :name attribute_method_suffix '_short?' define_attribute_methods :name alias_attribute :nickname, :name private def attribute_short?(attr) send(attr).length < 5 end end person = Person.new person.name = 'Bob' person.name # => "Bob" person.nickname # => "Bob" person.name_short? # => true person.nickname_short? # => true ``` attribute\_alias(name) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 248 def attribute_alias(name) attribute_aliases[name.to_s] end ``` Returns the original name for the alias `name` attribute\_alias?(new\_name) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 243 def attribute_alias?(new_name) attribute_aliases.key? new_name.to_s end ``` Is `new_name` an alias? attribute\_method\_affix(\*affixes) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 180 def attribute_method_affix(*affixes) self.attribute_method_matchers += affixes.map! { |affix| AttributeMethodMatcher.new(**affix) } undefine_attribute_methods end ``` Declares a method available for all attributes with the given prefix and suffix. Uses `method_missing` and `respond_to?` to rewrite the method. 
```
#{prefix}#{attr}#{suffix}(*args, &block)
```

to

```
#{prefix}attribute#{suffix}(#{attr}, *args, &block)
```

An `#{prefix}attribute#{suffix}` instance method must exist and accept at least the `attr` argument.

```
class Person
  include ActiveModel::AttributeMethods

  attr_accessor :name
  attribute_method_affix prefix: 'reset_', suffix: '_to_default!'
  define_attribute_methods :name

  private
    def reset_attribute_to_default!(attr)
      send("#{attr}=", 'Default Name')
    end
end

person = Person.new
person.name = 'Gem'
person.name                         # => 'Gem'
person.reset_name_to_default!
person.name                         # => 'Default Name'
```

attribute\_method\_prefix(\*prefixes, parameters: nil) Show source

```
# File activemodel/lib/active_model/attribute_methods.rb, line 109
def attribute_method_prefix(*prefixes, parameters: nil)
  self.attribute_method_matchers += prefixes.map! { |prefix| AttributeMethodMatcher.new(prefix: prefix, parameters: parameters) }
  undefine_attribute_methods
end
```

Declares a method available for all attributes with the given prefix. Uses `method_missing` and `respond_to?` to rewrite the method.

```
#{prefix}#{attr}(*args, &block)
```

to

```
#{prefix}attribute(#{attr}, *args, &block)
```

An instance method `#{prefix}attribute` must exist and accept at least the `attr` argument.

```
class Person
  include ActiveModel::AttributeMethods

  attr_accessor :name
  attribute_method_prefix 'clear_'
  define_attribute_methods :name

  private
    def clear_attribute(attr)
      send("#{attr}=", nil)
    end
end

person = Person.new
person.name = 'Bob'
person.name          # => "Bob"
person.clear_name
person.name          # => nil
```

attribute\_method\_suffix(\*suffixes, parameters: nil) Show source

```
# File activemodel/lib/active_model/attribute_methods.rb, line 144
def attribute_method_suffix(*suffixes, parameters: nil)
  self.attribute_method_matchers += suffixes.map! { |suffix| AttributeMethodMatcher.new(suffix: suffix, parameters: parameters) }
  undefine_attribute_methods
end
```

Declares a method available for all attributes with the given suffix.
Uses `method_missing` and `respond_to?` to rewrite the method. ``` #{attr}#{suffix}(*args, &block) ``` to ``` attribute#{suffix}(#{attr}, *args, &block) ``` An `attribute#{suffix}` instance method must exist and accept at least the `attr` argument. ``` class Person include ActiveModel::AttributeMethods attr_accessor :name attribute_method_suffix '_short?' define_attribute_methods :name private def attribute_short?(attr) send(attr).length < 5 end end person = Person.new person.name = 'Bob' person.name # => "Bob" person.name_short? # => true ``` define\_attribute\_method(attr\_name, \_owner: generated\_attribute\_methods) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 311 def define_attribute_method(attr_name, _owner: generated_attribute_methods) ActiveSupport::CodeGenerator.batch(_owner, __FILE__, __LINE__) do |owner| attribute_method_matchers.each do |matcher| method_name = matcher.method_name(attr_name) unless instance_method_already_implemented?(method_name) generate_method = "define_method_#{matcher.target}" if respond_to?(generate_method, true) send(generate_method, attr_name.to_s, owner: owner) else define_proxy_call(owner, method_name, matcher.target, matcher.parameters, attr_name.to_s, namespace: :active_model) end end end attribute_method_matchers_cache.clear end end ``` Declares an attribute that should be prefixed and suffixed by `ActiveModel::AttributeMethods`. To use, pass an attribute name (as string or symbol). Be sure to declare `define_attribute_method` after you define any prefix, suffix or affix method, or they will not hook in. ``` class Person include ActiveModel::AttributeMethods attr_accessor :name attribute_method_suffix '_short?' # Call to define_attribute_method must appear after the # attribute_method_prefix, attribute_method_suffix or # attribute_method_affix declarations. 
define_attribute_method :name private def attribute_short?(attr) send(attr).length < 5 end end person = Person.new person.name = 'Bob' person.name # => "Bob" person.name_short? # => true ``` define\_attribute\_methods(\*attr\_names) Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 276 def define_attribute_methods(*attr_names) ActiveSupport::CodeGenerator.batch(generated_attribute_methods, __FILE__, __LINE__) do |owner| attr_names.flatten.each { |attr_name| define_attribute_method(attr_name, _owner: owner) } end end ``` Declares the attributes that should be prefixed and suffixed by `ActiveModel::AttributeMethods`. To use, pass attribute names (as strings or symbols). Be sure to declare `define_attribute_methods` after you define any prefix, suffix or affix methods, or they will not hook in. ``` class Person include ActiveModel::AttributeMethods attr_accessor :name, :age, :address attribute_method_prefix 'clear_' # Call to define_attribute_methods must appear after the # attribute_method_prefix, attribute_method_suffix or # attribute_method_affix declarations. define_attribute_methods :name, :age, :address private def clear_attribute(attr) send("#{attr}=", nil) end end ``` undefine\_attribute\_methods() Show source ``` # File activemodel/lib/active_model/attribute_methods.rb, line 353 def undefine_attribute_methods generated_attribute_methods.module_eval do undef_method(*instance_methods) end attribute_method_matchers_cache.clear end ``` Removes all the previously dynamically defined methods from the class. ``` class Person include ActiveModel::AttributeMethods attr_accessor :name attribute_method_suffix '_short?' define_attribute_method :name private def attribute_short?(attr) send(attr).length < 5 end end person = Person.new person.name = 'Bob' person.name_short? # => true Person.undefine_attribute_methods person.name_short? 
# => NoMethodError
```

rails module ActiveModel::Validations::Callbacks module ActiveModel::Validations::Callbacks
===========================================

Included modules: [ActiveSupport::Callbacks](../../activesupport/callbacks)

Active Model Validation Callbacks
---------------------------------

Provides an interface for any class to have `before_validation` and `after_validation` callbacks.

First, include [`ActiveModel::Validations::Callbacks`](callbacks) from the class you are creating:

```
class MyModel
  include ActiveModel::Validations::Callbacks

  before_validation :do_stuff_before_validation
  after_validation  :do_stuff_after_validation
end
```

Like other `before_*` callbacks, if `before_validation` throws `:abort` then `valid?` will not be called.

rails module ActiveModel::Validations::ClassMethods module ActiveModel::Validations::ClassMethods
==============================================

attribute\_method?(attribute) Show source

```
# File activemodel/lib/active_model/validations.rb, line 270
def attribute_method?(attribute)
  method_defined?(attribute)
end
```

Returns `true` if `attribute` is an attribute method, `false` otherwise.

```
class Person
  include ActiveModel::Validations

  attr_accessor :name
end

Person.attribute_method?(:name) # => true
Person.attribute_method?(:age)  # => false
```

clear\_validators!() Show source

```
# File activemodel/lib/active_model/validations.rb, line 234
def clear_validators!
  reset_callbacks(:validate)
  _validators.clear
end
```

Clears all of the validators and validations.

Note that this will clear anything that is being used to validate the model for both the `validates_with` and `validate` methods. It clears the validators that are created with an invocation of `validates_with` and the callbacks that are set by an invocation of `validate`.
``` class Person include ActiveModel::Validations validates_with MyValidator validates_with OtherValidator, on: :create validates_with StrictValidator, strict: true validate :cannot_be_robot def cannot_be_robot errors.add(:base, 'A person cannot be a robot') if person_is_robot end end Person.validators # => [ # #<MyValidator:0x007fbff403e808 @options={}>, # #<OtherValidator:0x007fbff403d930 @options={on: :create}>, # #<StrictValidator:0x007fbff3204a30 @options={strict:true}> # ] ``` If one runs `Person.clear_validators!` and then checks to see what validators this class has, you would obtain: ``` Person.validators # => [] ``` Also, the callback set by `validate :cannot_be_robot` will be erased so that: ``` Person._validate_callbacks.empty? # => true ``` validate(\*args, &block) Show source ``` # File activemodel/lib/active_model/validations.rb, line 152 def validate(*args, &block) options = args.extract_options! if args.all?(Symbol) options.each_key do |k| unless VALID_OPTIONS_FOR_VALIDATE.include?(k) raise ArgumentError.new("Unknown key: #{k.inspect}. Valid keys are: #{VALID_OPTIONS_FOR_VALIDATE.map(&:inspect).join(', ')}. Perhaps you meant to call `validates` instead of `validate`?") end end end if options.key?(:on) options = options.dup options[:on] = Array(options[:on]) options[:if] = [ ->(o) { !(options[:on] & Array(o.validation_context)).empty? }, *options[:if] ] end set_callback(:validate, *args, options, &block) end ``` Adds a validation method or block to the class. This is useful when overriding the `validate` instance method becomes too unwieldy and you're looking for more descriptive declaration of your validations. 
This can be done with a symbol pointing to a method: ``` class Comment include ActiveModel::Validations validate :must_be_friends def must_be_friends errors.add(:base, 'Must be friends to leave a comment') unless commenter.friend_of?(commentee) end end ``` With a block which is passed with the current record to be validated: ``` class Comment include ActiveModel::Validations validate do |comment| comment.must_be_friends end def must_be_friends errors.add(:base, 'Must be friends to leave a comment') unless commenter.friend_of?(commentee) end end ``` Or with a block where `self` points to the current record to be validated: ``` class Comment include ActiveModel::Validations validate do errors.add(:base, 'Must be friends to leave a comment') unless commenter.friend_of?(commentee) end end ``` Note that the return value of validation methods is not relevant. It's not possible to halt the validate callback chain. Options: * `:on` - Specifies the contexts where this validation is active. Runs in all validation contexts by default `nil`. You can pass a symbol or an array of symbols. (e.g. `on: :create` or `on: :custom_validation_context` or `on: [:create, :custom_validation_context]`) * `:if` - Specifies a method, proc or string to call to determine if the validation should occur (e.g. `if: :allow_validation`, or `if: Proc.new { |user| user.signup_step > 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. * `:unless` - Specifies a method, proc or string to call to determine if the validation should not occur (e.g. `unless: :skip_validation`, or `unless: Proc.new { |user| user.signup_step <= 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. NOTE: Calling `validate` multiple times on the same method will overwrite previous definitions. 
validates(\*attributes) Show source ``` # File activemodel/lib/active_model/validations/validates.rb, line 106 def validates(*attributes) defaults = attributes.extract_options!.dup validations = defaults.slice!(*_validates_default_keys) raise ArgumentError, "You need to supply at least one attribute" if attributes.empty? raise ArgumentError, "You need to supply at least one validation" if validations.empty? defaults[:attributes] = attributes validations.each do |key, options| key = "#{key.to_s.camelize}Validator" begin validator = key.include?("::") ? key.constantize : const_get(key) rescue NameError raise ArgumentError, "Unknown validator: '#{key}'" end next unless options validates_with(validator, defaults.merge(_parse_validates_options(options))) end end ``` This method is a shortcut to all default validators and any custom validator classes ending in 'Validator'. Note that Rails default validators can be overridden inside specific classes by creating custom validator classes in their place such as PresenceValidator. Examples of using the default rails validators: ``` validates :username, absence: true validates :terms, acceptance: true validates :password, confirmation: true validates :username, exclusion: { in: %w(admin superuser) } validates :email, format: { with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i, on: :create } validates :age, inclusion: { in: 0..9 } validates :first_name, length: { maximum: 30 } validates :age, numericality: true validates :username, presence: true ``` The power of the `validates` method comes when using custom validators and default validators in one call for a given attribute. 
``` class EmailValidator < ActiveModel::EachValidator def validate_each(record, attribute, value) record.errors.add attribute, (options[:message] || "is not an email") unless /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i.match?(value) end end class Person include ActiveModel::Validations attr_accessor :name, :email validates :name, presence: true, length: { maximum: 100 } validates :email, presence: true, email: true end ``` [`Validator`](../validator) classes may also exist within the class being validated allowing custom modules of validators to be included as needed. ``` class Film include ActiveModel::Validations class TitleValidator < ActiveModel::EachValidator def validate_each(record, attribute, value) record.errors.add attribute, "must start with 'the'" unless /\Athe/i.match?(value) end end validates :name, title: true end ``` Additionally validator classes may be in another namespace and still used within any class. ``` validates :name, :'film/title' => true ``` The validators hash can also handle regular expressions, ranges, arrays and strings in shortcut form. ``` validates :email, format: /@/ validates :role, inclusion: %w(admin contributor) validates :password, length: 6..20 ``` When using shortcut form, ranges and arrays are passed to your validator's initializer as `options[:in]` while other types including regular expressions and strings are passed as `options[:with]`. There is also a list of options that could be used along with validators: * `:on` - Specifies the contexts where this validation is active. Runs in all validation contexts by default `nil`. You can pass a symbol or an array of symbols. (e.g. `on: :create` or `on: :custom_validation_context` or `on: [:create, :custom_validation_context]`) * `:if` - Specifies a method, proc or string to call to determine if the validation should occur (e.g. `if: :allow_validation`, or `if: Proc.new { |user| user.signup_step > 2 }`). 
The method, proc or string should return or evaluate to a `true` or `false` value. * `:unless` - Specifies a method, proc or string to call to determine if the validation should not occur (e.g. `unless: :skip_validation`, or `unless: Proc.new { |user| user.signup_step <= 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. * `:allow_nil` - Skip validation if the attribute is `nil`. * `:allow_blank` - Skip validation if the attribute is blank. * `:strict` - If the `:strict` option is set to true will raise [`ActiveModel::StrictValidationFailed`](../strictvalidationfailed) instead of adding the error. `:strict` option can also be set to any other exception. Example: ``` validates :password, presence: true, confirmation: true, if: :password_required? validates :token, length: 24, strict: TokenLengthException ``` Finally, the options `:if`, `:unless`, `:on`, `:allow_blank`, `:allow_nil`, `:strict` and `:message` can be given to one specific validator, as a hash: ``` validates :password, presence: { if: :password_required?, message: 'is forgotten.' }, confirmation: true ``` validates!(\*attributes) Show source ``` # File activemodel/lib/active_model/validations/validates.rb, line 148 def validates!(*attributes) options = attributes.extract_options! options[:strict] = true validates(*(attributes << options)) end ``` This method is used to define validations that cannot be corrected by end users and are considered exceptional. So each validator defined with bang or `:strict` option set to `true` will always raise `ActiveModel::StrictValidationFailed` instead of adding error when validation fails. See `validates` for more information about the validation itself. ``` class Person include ActiveModel::Validations attr_accessor :name validates! :name, presence: true end person = Person.new person.name = '' person.valid? 
# => ActiveModel::StrictValidationFailed: Name can't be blank ``` validates\_each(\*attr\_names, &block) Show source ``` # File activemodel/lib/active_model/validations.rb, line 85 def validates_each(*attr_names, &block) validates_with BlockValidator, _merge_attributes(attr_names), &block end ``` Validates each attribute against a block. ``` class Person include ActiveModel::Validations attr_accessor :first_name, :last_name validates_each :first_name, :last_name, allow_blank: true do |record, attr, value| record.errors.add attr, "starts with z." if value.start_with?("z") end end ``` Options: * `:on` - Specifies the contexts where this validation is active. Runs in all validation contexts by default `nil`. You can pass a symbol or an array of symbols. (e.g. `on: :create` or `on: :custom_validation_context` or `on: [:create, :custom_validation_context]`) * `:allow_nil` - Skip validation if attribute is `nil`. * `:allow_blank` - Skip validation if attribute is blank. * `:if` - Specifies a method, proc or string to call to determine if the validation should occur (e.g. `if: :allow_validation`, or `if: Proc.new { |user| user.signup_step > 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. * `:unless` - Specifies a method, proc or string to call to determine if the validation should not occur (e.g. `unless: :skip_validation`, or `unless: Proc.new { |user| user.signup_step <= 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. validates\_with(\*args, &block) Show source ``` # File activemodel/lib/active_model/validations/with.rb, line 81 def validates_with(*args, &block) options = args.extract_options! options[:class] = self args.each do |klass| validator = klass.new(options, &block) if validator.respond_to?(:attributes) && !validator.attributes.empty? 
validator.attributes.each do |attribute| _validators[attribute.to_sym] << validator end else _validators[nil] << validator end validate(validator, options) end end ``` Passes the record off to the class or classes specified and allows them to add errors based on more complex conditions. ``` class Person include ActiveModel::Validations validates_with MyValidator end class MyValidator < ActiveModel::Validator def validate(record) if some_complex_logic record.errors.add :base, 'This record is invalid' end end private def some_complex_logic # ... end end ``` You may also pass it multiple classes, like so: ``` class Person include ActiveModel::Validations validates_with MyValidator, MyOtherValidator, on: :create end ``` Configuration options: * `:on` - Specifies the contexts where this validation is active. Runs in all validation contexts by default `nil`. You can pass a symbol or an array of symbols. (e.g. `on: :create` or `on: :custom_validation_context` or `on: [:create, :custom_validation_context]`) * `:if` - Specifies a method, proc or string to call to determine if the validation should occur (e.g. `if: :allow_validation`, or `if: Proc.new { |user| user.signup_step > 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. * `:unless` - Specifies a method, proc or string to call to determine if the validation should not occur (e.g. `unless: :skip_validation`, or `unless: Proc.new { |user| user.signup_step <= 2 }`). The method, proc or string should return or evaluate to a `true` or `false` value. * `:strict` - Specifies whether validation should be strict. See `ActiveModel::Validations#validates!` for more information. 
If you pass any additional configuration options, they will be passed to the class and available as `options`:

```
class Person
  include ActiveModel::Validations
  validates_with MyValidator, my_custom_key: 'my custom value'
end

class MyValidator < ActiveModel::Validator
  def validate(record)
    options[:my_custom_key] # => "my custom value"
  end
end
```

validators() Show source

```
# File activemodel/lib/active_model/validations.rb, line 192
def validators
  _validators.values.flatten.uniq
end
```

List all validators that are being used to validate the model using the `validates_with` method.

```
class Person
  include ActiveModel::Validations

  validates_with MyValidator
  validates_with OtherValidator, on: :create
  validates_with StrictValidator, strict: true
end

Person.validators
# => [
#      #<MyValidator:0x007fbff403e808 @options={}>,
#      #<OtherValidator:0x007fbff403d930 @options={on: :create}>,
#      #<StrictValidator:0x007fbff3204a30 @options={strict:true}>
#    ]
```

validators\_on(\*attributes) Show source

```
# File activemodel/lib/active_model/validations.rb, line 254
def validators_on(*attributes)
  attributes.flat_map do |attribute|
    _validators[attribute.to_sym]
  end
end
```

List all validators that are being used to validate a specific attribute.

```
class Person
  include ActiveModel::Validations

  attr_accessor :name, :age

  validates_presence_of :name
  validates_inclusion_of :age, in: 0..99
end

Person.validators_on(:name)
# => [
#       #<ActiveModel::Validations::PresenceValidator:0x007fe604914e60 @attributes=[:name], @options={}>,
#    ]
```
rails module ActiveModel::Validations::HelperMethods module ActiveModel::Validations::HelperMethods =============================================== validates\_absence\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/absence.rb, line 28 def validates_absence_of(*attr_names) validates_with AbsenceValidator, _merge_attributes(attr_names) end ``` Validates that the specified attributes are blank (as defined by [`Object#present?`](../../object#method-i-present-3F)). Happens by default on save. ``` class Person < ActiveRecord::Base validates_absence_of :first_name end ``` The first\_name attribute must be in the object and it must be blank. Configuration options: * `:message` - A custom error message (default is: “must be blank”). There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. See `ActiveModel::Validations#validates` for more information validates\_acceptance\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/acceptance.rb, line 108 def validates_acceptance_of(*attr_names) validates_with AcceptanceValidator, _merge_attributes(attr_names) end ``` Encapsulates the pattern of wanting to validate the acceptance of a terms of service check box (or similar agreement). ``` class Person < ActiveRecord::Base validates_acceptance_of :terms_of_service validates_acceptance_of :eula, message: 'must be abided' end ``` If the database column does not exist, the `terms_of_service` attribute is entirely virtual. This check is performed only if `terms_of_service` is not `nil` and by default on save. Configuration options: * `:message` - A custom error message (default is: “must be accepted”). * `:accept` - Specifies a value that is considered accepted. Also accepts an array of possible values. The default value is an array [“1”, true], which makes it easy to relate to an HTML checkbox. 
This should be set to, or include, `true` if you are validating a database column, since the attribute is typecast from “1” to `true` before validation. There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. See `ActiveModel::Validations#validates` for more information. validates\_comparison\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/comparison.rb, line 77 def validates_comparison_of(*attr_names) validates_with ComparisonValidator, _merge_attributes(attr_names) end ``` Validates the value of a specified attribute fulfills all defined comparisons with another value, proc, or attribute. ``` class Person < ActiveRecord::Base validates_comparison_of :value, greater_than: 'the sum of its parts' end ``` Configuration options: * `:message` - A custom error message (default is: “failed comparison”). * `:greater_than` - Specifies the value must be greater than the supplied value. * `:greater_than_or_equal_to` - Specifies the value must be greater than or equal to the supplied value. * `:equal_to` - Specifies the value must be equal to the supplied value. * `:less_than` - Specifies the value must be less than the supplied value. * `:less_than_or_equal_to` - Specifies the value must be less than or equal to the supplied value. * `:other_than` - Specifies the value must not be equal to the supplied value. There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict` . See `ActiveModel::Validations#validates` for more information The validator requires at least one of the following checks to be supplied. 
Each will accept a proc, value, or a symbol which corresponds to a method: * `:greater_than` * `:greater_than_or_equal_to` * `:equal_to` * `:less_than` * `:less_than_or_equal_to` * `:other_than` For example: ``` class Person < ActiveRecord::Base validates_comparison_of :birth_date, less_than_or_equal_to: -> { Date.today } validates_comparison_of :preferred_name, other_than: :given_name, allow_nil: true end ``` validates\_confirmation\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/confirmation.rb, line 75 def validates_confirmation_of(*attr_names) validates_with ConfirmationValidator, _merge_attributes(attr_names) end ``` Encapsulates the pattern of wanting to validate a password or email address field with a confirmation. ``` Model: class Person < ActiveRecord::Base validates_confirmation_of :user_name, :password validates_confirmation_of :email_address, message: 'should match confirmation' end View: <%= password_field "person", "password" %> <%= password_field "person", "password_confirmation" %> ``` The added `password_confirmation` attribute is virtual; it exists only as an in-memory attribute for validating the password. To achieve this, the validation adds accessors to the model for the confirmation attribute. NOTE: This check is performed only if `password_confirmation` is not `nil`. To require confirmation, make sure to add a presence check for the confirmation attribute: ``` validates_presence_of :password_confirmation, if: :password_changed? ``` Configuration options: * `:message` - A custom error message (default is: “doesn't match `%{translated_attribute_name}`”). * `:case_sensitive` - Looks for an exact match. Ignored by non-text columns (`true` by default). There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. 
See `ActiveModel::Validations#validates` for more information.

validates\_exclusion\_of(\*attr\_names) Show source

```
# File activemodel/lib/active_model/validations/exclusion.rb, line 44
def validates_exclusion_of(*attr_names)
  validates_with ExclusionValidator, _merge_attributes(attr_names)
end
```

Validates that the value of the specified attribute is not in a particular enumerable object.

```
class Person < ActiveRecord::Base
  validates_exclusion_of :username, in: %w( admin superuser ),
    message: "You don't belong here"
  validates_exclusion_of :age, in: 30..60,
    message: 'This site is only for under 30 and over 60'
  validates_exclusion_of :format, in: %w( mov avi ),
    message: "extension %{value} is not allowed"
  validates_exclusion_of :password, in: ->(person) { [person.username, person.first_name] },
    message: 'should not be the same as your username or first name'
  validates_exclusion_of :karma, in: :reserved_karmas
end
```

Configuration options:

* `:in` - An enumerable object of items that the value shouldn't be part of. This can be supplied as a proc, lambda or symbol which returns an enumerable. If the enumerable is a numerical, time or datetime range the test is performed with `Range#cover?`, otherwise with `include?`. When using a proc or lambda the instance under validation is passed as an argument.
* `:within` - A synonym (or alias) for `:in`
* `:message` - Specifies a custom error message (default is: “is reserved”).

There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`.
See `ActiveModel::Validations#validates` for more information validates\_format\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/format.rb, line 108 def validates_format_of(*attr_names) validates_with FormatValidator, _merge_attributes(attr_names) end ``` Validates whether the value of the specified attribute is of the correct form, going by the regular expression provided. You can require that the attribute matches the regular expression: ``` class Person < ActiveRecord::Base validates_format_of :email, with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i, on: :create end ``` Alternatively, you can require that the specified attribute does *not* match the regular expression: ``` class Person < ActiveRecord::Base validates_format_of :email, without: /NOSPAM/ end ``` You can also provide a proc or lambda which will determine the regular expression that will be used to validate the attribute. ``` class Person < ActiveRecord::Base # Admin can have number as a first letter in their screen name validates_format_of :screen_name, with: ->(person) { person.admin? ? /\A[a-z0-9][a-z0-9_\-]*\z/i : /\A[a-z][a-z0-9_\-]*\z/i } end ``` Note: use `\A` and `\z` to match the start and end of the string, `^` and `$` match the start/end of a line. Due to frequent misuse of `^` and `$`, you need to pass the `multiline: true` option in case you use any of these two anchors in the provided regular expression. In most cases, you should be using `\A` and `\z`. You must pass either `:with` or `:without` as an option. In addition, both must be a regular expression or a proc or lambda, or else an exception will be raised. Configuration options: * `:message` - A custom error message (default is: “is invalid”). * `:with` - Regular expression that if the attribute matches will result in a successful validation. This can be provided as a proc or lambda returning regular expression which will be called at runtime. 
* `:without` - Regular expression that if the attribute does not match will result in a successful validation. This can be provided as a proc or lambda returning regular expression which will be called at runtime. * `:multiline` - Set to true if your regular expression contains anchors that match the beginning or end of lines as opposed to the beginning or end of the string. These anchors are `^` and `$`. There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. See `ActiveModel::Validations#validates` for more information validates\_inclusion\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/inclusion.rb, line 42 def validates_inclusion_of(*attr_names) validates_with InclusionValidator, _merge_attributes(attr_names) end ``` Validates whether the value of the specified attribute is available in a particular enumerable object. ``` class Person < ActiveRecord::Base validates_inclusion_of :role, in: %w( admin contributor ) validates_inclusion_of :age, in: 0..99 validates_inclusion_of :format, in: %w( jpg gif png ), message: "extension %{value} is not included in the list" validates_inclusion_of :states, in: ->(person) { STATES[person.country] } validates_inclusion_of :karma, in: :available_karmas end ``` Configuration options: * `:in` - An enumerable object of available items. This can be supplied as a proc, lambda or symbol which returns an enumerable. If the enumerable is a numerical, time or datetime range the test is performed with `Range#cover?`, otherwise with `include?`. When using a proc or lambda the instance under validation is passed as an argument. * `:within` - A synonym (or alias) for `:in`. * `:message` - Specifies a custom error message (default is: “is not included in the list”). There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`.
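The anchor note in `validates_format_of` above (use `\A`/`\z`, not `^`/`$`) is easy to demonstrate with plain Ruby regular expressions (a standalone sketch, not validator code; the input value is illustrative only):

```ruby
# ^ and $ match at line boundaries, so a multiline value can slip an
# unwanted first line past the check; \A and \z anchor the whole string.
line_anchored   = /^[a-z0-9]+$/i
string_anchored = /\A[a-z0-9]+\z/i

input = "javascript:alert(1)\nvalidpart"

input.match?(line_anchored)    # => true, the second line matches on its own
input.match?(string_anchored)  # => false, the whole string must match
```

This is exactly why the validator raises unless you pass `multiline: true` when the provided regular expression uses `^` or `$`.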
See `ActiveModel::Validations#validates` for more information validates\_length\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/length.rb, line 122 def validates_length_of(*attr_names) validates_with LengthValidator, _merge_attributes(attr_names) end ``` Validates that the specified attributes match the length restrictions supplied. Only one constraint option can be used at a time apart from `:minimum` and `:maximum` that can be combined together: ``` class Person < ActiveRecord::Base validates_length_of :first_name, maximum: 30 validates_length_of :last_name, maximum: 30, message: "less than 30 if you don't mind" validates_length_of :fax, in: 7..32, allow_nil: true validates_length_of :phone, in: 7..32, allow_blank: true validates_length_of :user_name, within: 6..20, too_long: 'pick a shorter name', too_short: 'pick a longer name' validates_length_of :zip_code, minimum: 5, too_short: 'please enter at least 5 characters' validates_length_of :smurf_leader, is: 4, message: "papa is spelled with 4 characters... don't play me." validates_length_of :words_in_essay, minimum: 100, too_short: 'Your essay must be at least 100 words.' private def words_in_essay essay.scan(/\w+/) end end ``` Constraint options: * `:minimum` - The minimum size of the attribute. * `:maximum` - The maximum size of the attribute. Allows `nil` by default if not used with `:minimum`. * `:is` - The exact size of the attribute. * `:within` - A range specifying the minimum and maximum size of the attribute. * `:in` - A synonym (or alias) for `:within`. Other options: * `:allow_nil` - Attribute may be `nil`; skip validation. * `:allow_blank` - Attribute may be blank; skip validation. * `:too_long` - The error message if the attribute goes over the maximum (default is: “is too long (maximum is %{count} characters)”). * `:too_short` - The error message if the attribute goes under the minimum (default is: “is too short (minimum is %{count} characters)”). 
* `:wrong_length` - The error message if using the `:is` method and the attribute is the wrong size (default is: “is the wrong length (should be %{count} characters)”). * `:message` - The error message to use for a `:minimum`, `:maximum`, or `:is` violation. An alias of the appropriate `too_long`/`too_short`/`wrong_length` message. There is also a list of default options supported by every validator: `:if`, `:unless`, `:on` and `:strict`. See `ActiveModel::Validations#validates` for more information Also aliased as: [validates\_size\_of](helpermethods#method-i-validates_size_of) validates\_numericality\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/numericality.rb, line 205 def validates_numericality_of(*attr_names) validates_with NumericalityValidator, _merge_attributes(attr_names) end ``` Validates whether the value of the specified attribute is numeric by trying to convert it to a float with [`Kernel`](../../kernel).Float (if `only_integer` is `false`) or applying it to the regular expression `/\A[+\-]?\d+\z/` (if `only_integer` is set to `true`). Precision of [`Kernel`](../../kernel).Float values is guaranteed up to 15 digits. ``` class Person < ActiveRecord::Base validates_numericality_of :value, on: :create end ``` Configuration options: * `:message` - A custom error message (default is: “is not a number”). * `:only_integer` - Specifies whether the value has to be an integer (default is `false`). * `:allow_nil` - Skip validation if attribute is `nil` (default is `false`). Notice that for [`Integer`](../../integer) and `Float` columns empty strings are converted to `nil`. * `:greater_than` - Specifies the value must be greater than the supplied value. * `:greater_than_or_equal_to` - Specifies the value must be greater than or equal to the supplied value. * `:equal_to` - Specifies the value must be equal to the supplied value. * `:less_than` - Specifies the value must be less than the supplied value.
* `:less_than_or_equal_to` - Specifies the value must be less than or equal to the supplied value. * `:other_than` - Specifies the value must be other than the supplied value. * `:odd` - Specifies the value must be an odd number. * `:even` - Specifies the value must be an even number. * `:in` - Check that the value is within a range. There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. See `ActiveModel::Validations#validates` for more information The following checks can also be supplied with a proc or a symbol which corresponds to a method: * `:greater_than` * `:greater_than_or_equal_to` * `:equal_to` * `:less_than` * `:less_than_or_equal_to` * `:only_integer` * `:other_than` For example: ``` class Person < ActiveRecord::Base validates_numericality_of :width, less_than: ->(person) { person.height } validates_numericality_of :width, greater_than: :minimum_weight end ``` validates\_presence\_of(\*attr\_names) Show source ``` # File activemodel/lib/active_model/validations/presence.rb, line 34 def validates_presence_of(*attr_names) validates_with PresenceValidator, _merge_attributes(attr_names) end ``` Validates that the specified attributes are not blank (as defined by [`Object#blank?`](../../object#method-i-blank-3F)). Happens by default on save. ``` class Person < ActiveRecord::Base validates_presence_of :first_name end ``` The first\_name attribute must be in the object and it cannot be blank. If you want to validate the presence of a boolean field (where the real values are `true` and `false`), you will want to use `validates_inclusion_of :field_name, in: [true, false]`. This is due to the way [`Object#blank?`](../../object#method-i-blank-3F) handles boolean values: `false.blank? # => true`. Configuration options: * `:message` - A custom error message (default is: “can't be blank”).
There is also a list of default options supported by every validator: `:if`, `:unless`, `:on`, `:allow_nil`, `:allow_blank`, and `:strict`. See `ActiveModel::Validations#validates` for more information validates\_size\_of(\*attr\_names) Alias for: [validates\_length\_of](helpermethods#method-i-validates_length_of) rails module ActiveModel::Validations::Callbacks::ClassMethods module ActiveModel::Validations::Callbacks::ClassMethods ========================================================= after\_validation(\*args, &block) Show source ``` # File activemodel/lib/active_model/validations/callbacks.rb, line 90 def after_validation(*args, &block) options = args.extract_options! options = options.dup options[:prepend] = true set_options_for_callback(options) set_callback(:validation, :after, *args, options, &block) end ``` Defines a callback that will get called right after validation. ``` class Person include ActiveModel::Validations include ActiveModel::Validations::Callbacks attr_accessor :name, :status validates_presence_of :name after_validation :set_status private def set_status self.status = errors.empty? end end person = Person.new person.name = '' person.valid? # => false person.status # => false person.name = 'bob' person.valid? # => true person.status # => true ``` before\_validation(\*args, &block) Show source ``` # File activemodel/lib/active_model/validations/callbacks.rb, line 56 def before_validation(*args, &block) options = args.extract_options! set_options_for_callback(options) set_callback(:validation, :before, *args, options, &block) end ``` Defines a callback that will get called right before validation. ``` class Person include ActiveModel::Validations include ActiveModel::Validations::Callbacks attr_accessor :name validates_length_of :name, maximum: 6 before_validation :remove_whitespaces private def remove_whitespaces name.strip! end end person = Person.new person.name = ' bob ' person.valid? 
# => true person.name # => "bob" ``` rails module ActiveModel::Attributes::ClassMethods module ActiveModel::Attributes::ClassMethods ============================================= attribute(name, cast\_type = nil, default: NO\_DEFAULT\_PROVIDED, \*\*options) Show source ``` # File activemodel/lib/active_model/attributes.rb, line 19 def attribute(name, cast_type = nil, default: NO_DEFAULT_PROVIDED, **options) name = name.to_s cast_type = Type.lookup(cast_type, **options) if Symbol === cast_type cast_type ||= attribute_types[name] self.attribute_types = attribute_types.merge(name => cast_type) define_default_attribute(name, default, cast_type) define_attribute_method(name) end ``` attribute\_names() Show source ``` # File activemodel/lib/active_model/attributes.rb, line 41 def attribute_names attribute_types.keys end ``` Returns an array of attribute names as strings ``` class Person include ActiveModel::Attributes attribute :name, :string attribute :age, :integer end Person.attribute_names # => ["name", "age"] ```
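The `attribute` registration shown above boils down to keeping a class-level hash of name-to-type mappings and defining accessor methods from it. A simplified, dependency-free sketch of that pattern (all names here are hypothetical illustrations, not Active Model's API; a callable stands in for Active Model's `Type` objects):

```ruby
# Hypothetical miniature of the attribute-registration pattern.
module MiniAttributes
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def attribute_types
      @attribute_types ||= {}
    end

    # cast is any callable coercing assigned values (a stand-in for a
    # real cast type object).
    def attribute(name, cast = ->(v) { v })
      name = name.to_s
      attribute_types[name] = cast
      define_method(name) { (@attributes ||= {})[name] }
      define_method("#{name}=") { |v| (@attributes ||= {})[name] = cast.call(v) }
    end

    def attribute_names
      attribute_types.keys
    end
  end
end

class Person
  include MiniAttributes
  attribute :name, ->(v) { v.to_s }
  attribute :age,  ->(v) { v.to_i }
end

person = Person.new
person.age = "42"
person.age              # => 42, coerced on assignment
Person.attribute_names  # => ["name", "age"]
```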
rails module ActiveModel::Serializers::JSON module ActiveModel::Serializers::JSON ====================================== Included modules: [ActiveModel::Serialization](../serialization) Active Model JSON Serializer ---------------------------- as\_json(options = nil) Show source ``` # File activemodel/lib/active_model/serializers/json.rb, line 96 def as_json(options = nil) root = if options && options.key?(:root) options[:root] else include_root_in_json end hash = serializable_hash(options).as_json if root root = model_name.element if root == true { root => hash } else hash end end ``` Returns a hash representing the model. Some configuration can be passed through `options`. The option `include_root_in_json` controls the top-level behavior of `as_json`. If `true`, `as_json` will emit a single root node named after the object's type. The default value for `include_root_in_json` option is `false`. ``` user = User.find(1) user.as_json # => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true} ActiveRecord::Base.include_root_in_json = true user.as_json # => { "user" => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true } } ``` This behavior can also be achieved by setting the `:root` option to `true` as in: ``` user = User.find(1) user.as_json(root: true) # => { "user" => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true } } ``` If you prefer, `:root` may also be set to a custom string key instead as in: ``` user = User.find(1) user.as_json(root: "author") # => { "author" => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true } } ``` Without any `options`, the returned [`Hash`](../../hash) will include all the model's attributes.
``` user = User.find(1) user.as_json # => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true} ``` The `:only` and `:except` options can be used to limit the attributes included, and work similar to the `attributes` method. ``` user.as_json(only: [:id, :name]) # => { "id" => 1, "name" => "Konata Izumi" } user.as_json(except: [:id, :created_at, :age]) # => { "name" => "Konata Izumi", "awesome" => true } ``` To include the result of some method calls on the model use `:methods`: ``` user.as_json(methods: :permalink) # => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true, # "permalink" => "1-konata-izumi" } ``` To include associations use `:include`: ``` user.as_json(include: :posts) # => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true, # "posts" => [ { "id" => 1, "author_id" => 1, "title" => "Welcome to the weblog" }, # { "id" => 2, "author_id" => 1, "title" => "So I was thinking" } ] } ``` Second level and higher order associations work as well: ``` user.as_json(include: { posts: { include: { comments: { only: :body } }, only: :title } }) # => { "id" => 1, "name" => "Konata Izumi", "age" => 16, # "created_at" => "2006-08-01T17:27:13.000Z", "awesome" => true, # "posts" => [ { "comments" => [ { "body" => "1st post!" }, { "body" => "Second!" } ], # "title" => "Welcome to the weblog" }, # { "comments" => [ { "body" => "Don't think too hard" } ], # "title" => "So I was thinking" } ] } ``` from\_json(json, include\_root = include\_root\_in\_json) Show source ``` # File activemodel/lib/active_model/serializers/json.rb, line 146 def from_json(json, include_root = include_root_in_json) hash = ActiveSupport::JSON.decode(json) hash = hash.values.first if include_root self.attributes = hash self end ``` Sets the model `attributes` from a [`JSON`](json) string. Returns `self`. 
``` class Person include ActiveModel::Serializers::JSON attr_accessor :name, :age, :awesome def attributes=(hash) hash.each do |key, value| send("#{key}=", value) end end def attributes instance_values end end json = { name: 'bob', age: 22, awesome:true }.to_json person = Person.new person.from_json(json) # => #<Person:0x007fec5e7a0088 @age=22, @awesome=true, @name="bob"> person.name # => "bob" person.age # => 22 person.awesome # => true ``` The default value for `include_root` is `false`. You can change it to `true` if the given [`JSON`](json) string includes a single root node. ``` json = { person: { name: 'bob', age: 22, awesome:true } }.to_json person = Person.new person.from_json(json, true) # => #<Person:0x007fec5e7a0088 @age=22, @awesome=true, @name="bob"> person.name # => "bob" person.age # => 22 person.awesome # => true ``` rails class ActiveRecord::Type::Value class ActiveRecord::Type::Value ================================ Parent: [Object](../../object) limit[R] precision[R] scale[R] new(precision: nil, limit: nil, scale: nil) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 8 def initialize(precision: nil, limit: nil, scale: nil) @precision = precision @scale = scale @limit = limit end ``` ==(other) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 109 def ==(other) self.class == other.class && precision == other.precision && scale == other.scale && limit == other.limit end ``` Also aliased as: [eql?](value#method-i-eql-3F) assert\_valid\_value(\_) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 121 def assert_valid_value(_) end ``` cast(value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 45 def cast(value) cast_value(value) unless value.nil? end ``` [`Type`](../type) casts a value from user input (e.g. from a setter). This value may be a string from the form builder, or a ruby object passed to a setter. 
There is currently no way to differentiate between which source it came from. The return value of this method will be returned from [`ActiveRecord::AttributeMethods::Read#read_attribute`](../../activerecord/attributemethods/read#method-i-read_attribute). See also: [`Value#cast_value`](value#method-i-cast_value). `value` The raw input, as provided to the attribute setter. changed?(old\_value, new\_value, \_new\_value\_before\_type\_cast) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 72 def changed?(old_value, new_value, _new_value_before_type_cast) old_value != new_value end ``` Determines whether a value has changed for dirty checking. `old_value` and `new_value` will always be type-cast. Types should not need to override this method. changed\_in\_place?(raw\_old\_value, new\_value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 93 def changed_in_place?(raw_old_value, new_value) false end ``` Determines whether the mutable value has been modified since it was read. Returns `false` by default. If your type returns an object which could be mutated, you should override this method. You will need to either: * pass `new_value` to [`Value#serialize`](value#method-i-serialize) and compare it to `raw_old_value` or * pass `raw_old_value` to [`Value#deserialize`](value#method-i-deserialize) and compare it to `new_value` `raw_old_value` The original value, before being passed to `deserialize`. `new_value` The current value, after type casting. deserialize(value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 31 def deserialize(value) cast(value) end ``` Converts a value from database input to the appropriate ruby type. The return value of this method will be returned from [`ActiveRecord::AttributeMethods::Read#read_attribute`](../../activerecord/attributemethods/read#method-i-read_attribute). The default implementation just calls [`Value#cast`](value#method-i-cast). 
`value` The raw input, as provided from the database. eql?(other) Alias for: [==](value#method-i-3D-3D) hash() Show source ``` # File activemodel/lib/active_model/type/value.rb, line 117 def hash [self.class, precision, scale, limit].hash end ``` serializable?(value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 18 def serializable?(value) true end ``` Returns true if this type can convert `value` to a type that is usable by the database. For example a boolean type can return `true` if the value parameter is a Ruby boolean, but may return `false` if the value parameter is some other object. serialize(value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 53 def serialize(value) value end ``` Casts a value from the ruby type to a type that the database knows how to understand. The returned value from this method should be a `String`, `Numeric`, `Date`, `Time`, `Symbol`, `true`, `false`, or `nil`. cast\_value(value) Show source ``` # File activemodel/lib/active_model/type/value.rb, line 128 def cast_value(value) # :doc: value end ``` Convenience method for types which do not need separate type casting behavior for user and database inputs. Called by [`Value#cast`](value#method-i-cast) for values except `nil`. rails class ActiveRecord::Type::Boolean class ActiveRecord::Type::Boolean ================================== Parent: Value Active Model Type Boolean ------------------------- A class that behaves like a boolean type, including rules for coercion of user input. ### Coercion Values set from user input will first be coerced into the appropriate ruby type. Coercion behavior is roughly mapped to Ruby's boolean semantics. 
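The `cast`/`cast_value` split described above is the intended extension point: custom types usually override only `cast_value`, while the template handles `nil` and wires `deserialize` through `cast`. A dependency-free sketch of that pattern (a simplified stand-in with hypothetical class names, not the real `ActiveModel::Type::Value`):

```ruby
# Simplified stand-in for the Value template: cast skips nil and
# delegates to cast_value, which subclasses override.
class SketchValue
  def cast(value)
    cast_value(value) unless value.nil?
  end

  # Database input goes through the same casting path by default.
  def deserialize(value)
    cast(value)
  end

  # Ruby value -> database-ready value; identity by default.
  def serialize(value)
    value
  end

  private

  def cast_value(value)
    value # identity by default, as in the real template
  end
end

class SketchInteger < SketchValue
  private

  def cast_value(value)
    value.to_i
  end
end

type = SketchInteger.new
type.cast("42")  # => 42
type.cast(nil)   # => nil, handled by the template, not the subclass
```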
* “false”, “f”, “0”, `0` or any other value in `FALSE_VALUES` will be coerced to `false` * Empty strings are coerced to `nil` * All other values will be coerced to `true` FALSE\_VALUES rails module ActiveJob::TestHelper module ActiveJob::TestHelper ============================= Included modules: [ActiveSupport::Testing::Assertions](../activesupport/testing/assertions) Provides helper methods for testing Active Job assert\_enqueued\_jobs(number, only: nil, except: nil, queue: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 123 def assert_enqueued_jobs(number, only: nil, except: nil, queue: nil, &block) if block_given? original_jobs = enqueued_jobs_with(only: only, except: except, queue: queue) _assert_nothing_raised_or_warn("assert_enqueued_jobs", &block) new_jobs = enqueued_jobs_with(only: only, except: except, queue: queue) actual_count = (new_jobs - original_jobs).count else actual_count = enqueued_jobs_with(only: only, except: except, queue: queue).count end assert_equal number, actual_count, "#{number} jobs expected, but #{actual_count} were enqueued" end ``` Asserts that the number of enqueued jobs matches the given number. ``` def test_jobs assert_enqueued_jobs 0 HelloJob.perform_later('david') assert_enqueued_jobs 1 HelloJob.perform_later('abdelkader') assert_enqueued_jobs 2 end ``` If a block is passed, asserts that the block will cause the specified number of jobs to be enqueued. ``` def test_jobs_again assert_enqueued_jobs 1 do HelloJob.perform_later('cristian') end assert_enqueued_jobs 2 do HelloJob.perform_later('aaron') HelloJob.perform_later('rafael') end end ``` Asserts the number of times a specific job was enqueued by passing `:only` option. ``` def test_logging_job assert_enqueued_jobs 1, only: LoggingJob do LoggingJob.perform_later HelloJob.perform_later('jeremy') end end ``` Asserts the number of times a job except specific class was enqueued by passing `:except` option.
``` def test_logging_job assert_enqueued_jobs 1, except: HelloJob do LoggingJob.perform_later HelloJob.perform_later('jeremy') end end ``` `:only` and `:except` options accept [`Class`](../class), [`Array`](../array) of [`Class`](../class) or Proc. When passed a Proc, a hash containing the job's class and its arguments is passed as an argument. Asserts the number of times a job is enqueued to a specific queue by passing `:queue` option. ``` def test_logging_job assert_enqueued_jobs 2, queue: 'default' do LoggingJob.perform_later HelloJob.perform_later('elfassy') end end ``` assert\_enqueued\_with(job: nil, args: nil, at: nil, queue: nil, priority: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 392 def assert_enqueued_with(job: nil, args: nil, at: nil, queue: nil, priority: nil, &block) expected = { job: job, args: args, at: at, queue: queue, priority: priority }.compact expected_args = prepare_args_for_assertion(expected) potential_matches = [] if block_given? original_enqueued_jobs = enqueued_jobs.dup _assert_nothing_raised_or_warn("assert_enqueued_with", &block) jobs = enqueued_jobs - original_enqueued_jobs else jobs = enqueued_jobs end matching_job = jobs.find do |enqueued_job| deserialized_job = deserialize_args_for_assertion(enqueued_job) potential_matches << deserialized_job expected_args.all? do |key, value| if value.respond_to?(:call) value.call(deserialized_job[key]) else value == deserialized_job[key] end end end matching_class = potential_matches.select do |enqueued_job| enqueued_job["job_class"] == job.to_s end message = +"No enqueued job found with #{expected}" if potential_matches.empty? message << "\n\nNo jobs were enqueued" elsif matching_class.empty?
message << "\n\nNo jobs of class #{expected[:job]} were enqueued, job classes enqueued: " message << potential_matches.map { |job| job["job_class"] }.join(", ") else message << "\n\nPotential matches: #{matching_class.join("\n")}" end assert matching_job, message instantiate_job(matching_job) end ``` Asserts that the job has been enqueued with the given arguments. ``` def test_assert_enqueued_with MyJob.perform_later(1,2,3) assert_enqueued_with(job: MyJob, args: [1,2,3]) MyJob.set(wait_until: Date.tomorrow.noon, queue: "my_queue").perform_later assert_enqueued_with(at: Date.tomorrow.noon, queue: "my_queue") end ``` The given arguments may also be specified as matcher procs that return a boolean value indicating whether a job's attribute meets certain criteria. For example, a proc can be used to match a range of times: ``` def test_assert_enqueued_with at_matcher = ->(job_at) { (Date.yesterday..Date.tomorrow).cover?(job_at) } MyJob.set(wait_until: Date.today.noon).perform_later assert_enqueued_with(job: MyJob, at: at_matcher) end ``` A proc can also be used to match a subset of a job's args: ``` def test_assert_enqueued_with args_matcher = ->(job_args) { job_args[0].key?(:foo) } MyJob.perform_later(foo: "bar", other_arg: "No need to check in the test") assert_enqueued_with(job: MyJob, args: args_matcher) end ``` If a block is passed, asserts that the block will cause the job to be enqueued with the given arguments. 
``` def test_assert_enqueued_with assert_enqueued_with(job: MyJob, args: [1,2,3]) do MyJob.perform_later(1,2,3) end assert_enqueued_with(job: MyJob, at: Date.tomorrow.noon) do MyJob.set(wait_until: Date.tomorrow.noon).perform_later end end ``` assert\_no\_enqueued\_jobs(only: nil, except: nil, queue: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 185 def assert_no_enqueued_jobs(only: nil, except: nil, queue: nil, &block) assert_enqueued_jobs 0, only: only, except: except, queue: queue, &block end ``` Asserts that no jobs have been enqueued. ``` def test_jobs assert_no_enqueued_jobs HelloJob.perform_later('jeremy') assert_enqueued_jobs 1 end ``` If a block is passed, asserts that the block will not cause any job to be enqueued. ``` def test_jobs_again assert_no_enqueued_jobs do # No job should be enqueued from this block end end ``` Asserts that no jobs of a specific kind are enqueued by passing `:only` option. ``` def test_no_logging assert_no_enqueued_jobs only: LoggingJob do HelloJob.perform_later('jeremy') end end ``` Asserts that no jobs except specific class are enqueued by passing `:except` option. ``` def test_no_logging assert_no_enqueued_jobs except: HelloJob do HelloJob.perform_later('jeremy') end end ``` `:only` and `:except` options accept [`Class`](../class), [`Array`](../array) of [`Class`](../class) or Proc. When passed a Proc, a hash containing the job's class and its arguments is passed as an argument.
Asserts that no jobs are enqueued to a specific queue by passing `:queue` option ``` def test_no_logging assert_no_enqueued_jobs queue: 'default' do LoggingJob.set(queue: :some_queue).perform_later end end ``` Note: This assertion is simply a shortcut for: ``` assert_enqueued_jobs 0, &block ``` assert\_no\_performed\_jobs(only: nil, except: nil, queue: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 343 def assert_no_performed_jobs(only: nil, except: nil, queue: nil, &block) assert_performed_jobs 0, only: only, except: except, queue: queue, &block end ``` Asserts that no jobs have been performed. ``` def test_jobs assert_no_performed_jobs perform_enqueued_jobs do HelloJob.perform_later('matthew') assert_performed_jobs 1 end end ``` If a block is passed, asserts that the block will not cause any job to be performed. ``` def test_jobs_again assert_no_performed_jobs do # No job should be performed from this block end end ``` The block form supports filtering. If the `:only` option is specified, then only the listed job(s) will not be performed. ``` def test_no_logging assert_no_performed_jobs only: LoggingJob do HelloJob.perform_later('jeremy') end end ``` Also if the `:except` option is specified, then the job(s) except specific class will not be performed. ``` def test_no_logging assert_no_performed_jobs except: HelloJob do HelloJob.perform_later('jeremy') end end ``` `:only` and `:except` options accept [`Class`](../class), [`Array`](../array) of [`Class`](../class) or Proc. When passed a Proc, an instance of the job will be passed as argument. If the `:queue` option is specified, then only the job(s) enqueued to a specific queue will not be performed. 
``` def test_assert_no_performed_jobs_with_queue_option assert_no_performed_jobs queue: :some_queue do HelloJob.set(queue: :other_queue).perform_later("jeremy") end end ``` Note: This assertion is simply a shortcut for: ``` assert_performed_jobs 0, &block ``` assert\_performed\_jobs(number, only: nil, except: nil, queue: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 275 def assert_performed_jobs(number, only: nil, except: nil, queue: nil, &block) if block_given? original_count = performed_jobs.size perform_enqueued_jobs(only: only, except: except, queue: queue, &block) new_count = performed_jobs.size performed_jobs_size = new_count - original_count else performed_jobs_size = performed_jobs_with(only: only, except: except, queue: queue).count end assert_equal number, performed_jobs_size, "#{number} jobs expected, but #{performed_jobs_size} were performed" end ``` Asserts that the number of performed jobs matches the given number. If no block is passed, `perform_enqueued_jobs` must be called around or after the job call. ``` def test_jobs assert_performed_jobs 0 perform_enqueued_jobs do HelloJob.perform_later('xavier') end assert_performed_jobs 1 HelloJob.perform_later('yves') perform_enqueued_jobs assert_performed_jobs 2 end ``` If a block is passed, asserts that the block will cause the specified number of jobs to be performed. ``` def test_jobs_again assert_performed_jobs 1 do HelloJob.perform_later('robin') end assert_performed_jobs 2 do HelloJob.perform_later('carlos') HelloJob.perform_later('sean') end end ``` This method also supports filtering. If the `:only` option is specified, then only the listed job(s) will be performed. ``` def test_hello_job assert_performed_jobs 1, only: HelloJob do HelloJob.perform_later('jeremy') LoggingJob.perform_later end end ``` Also if the `:except` option is specified, then the job(s) except specific class will be performed. 
``` def test_hello_job assert_performed_jobs 1, except: LoggingJob do HelloJob.perform_later('jeremy') LoggingJob.perform_later end end ``` An array may also be specified, to support testing multiple jobs. ``` def test_hello_and_logging_jobs assert_nothing_raised do assert_performed_jobs 2, only: [HelloJob, LoggingJob] do HelloJob.perform_later('jeremy') LoggingJob.perform_later('stewie') RescueJob.perform_later('david') end end end ``` A proc may also be specified. When passed a Proc, the job's instance will be passed as argument. ``` def test_hello_and_logging_jobs assert_nothing_raised do assert_performed_jobs(1, only: ->(job) { job.is_a?(HelloJob) }) do HelloJob.perform_later('jeremy') LoggingJob.perform_later('stewie') RescueJob.perform_later('david') end end end ``` If the `:queue` option is specified, then only the job(s) enqueued to a specific queue will be performed. ``` def test_assert_performed_jobs_with_queue_option assert_performed_jobs 1, queue: :some_queue do HelloJob.set(queue: :some_queue).perform_later("jeremy") HelloJob.set(queue: :other_queue).perform_later("bogdan") end end ``` assert\_performed\_with(job: nil, args: nil, at: nil, queue: nil, priority: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 494 def assert_performed_with(job: nil, args: nil, at: nil, queue: nil, priority: nil, &block) expected = { job: job, args: args, at: at, queue: queue, priority: priority }.compact expected_args = prepare_args_for_assertion(expected) potential_matches = [] if block_given? original_performed_jobs_count = performed_jobs.count perform_enqueued_jobs(&block) jobs = performed_jobs.drop(original_performed_jobs_count) else jobs = performed_jobs end matching_job = jobs.find do |enqueued_job| deserialized_job = deserialize_args_for_assertion(enqueued_job) potential_matches << deserialized_job expected_args.all? 
do |key, value| if value.respond_to?(:call) value.call(deserialized_job[key]) else value == deserialized_job[key] end end end matching_class = potential_matches.select do |enqueued_job| enqueued_job["job_class"] == job.to_s end message = +"No performed job found with #{expected}" if potential_matches.empty? message << "\n\nNo jobs were performed" elsif matching_class.empty? message << "\n\nNo jobs of class #{expected[:job]} were performed, job classes performed: " message << potential_matches.map { |job| job["job_class"] }.join(", ") else message << "\n\nPotential matches: #{matching_class.join("\n")}" end assert matching_job, message instantiate_job(matching_job) end ``` Asserts that the job has been performed with the given arguments. ``` def test_assert_performed_with MyJob.perform_later(1,2,3) perform_enqueued_jobs assert_performed_with(job: MyJob, args: [1,2,3]) MyJob.set(wait_until: Date.tomorrow.noon, queue: "my_queue").perform_later perform_enqueued_jobs assert_performed_with(at: Date.tomorrow.noon, queue: "my_queue") end ``` The given arguments may also be specified as matcher procs that return a boolean value indicating whether a job's attribute meets certain criteria. 
For example, a proc can be used to match a range of times: ``` def test_assert_performed_with at_matcher = ->(job_at) { (Date.yesterday..Date.tomorrow).cover?(job_at) } MyJob.set(wait_until: Date.today.noon).perform_later perform_enqueued_jobs assert_performed_with(job: MyJob, at: at_matcher) end ``` A proc can also be used to match a subset of a job's args: ``` def test_assert_performed_with args_matcher = ->(job_args) { job_args[0].key?(:foo) } MyJob.perform_later(foo: "bar", other_arg: "No need to check in the test") perform_enqueued_jobs assert_performed_with(job: MyJob, args: args_matcher) end ``` If a block is passed, that block performs all of the jobs that were enqueued throughout the duration of the block and asserts that the job has been performed with the given arguments in the block. ``` def test_assert_performed_with assert_performed_with(job: MyJob, args: [1,2,3]) do MyJob.perform_later(1,2,3) end assert_performed_with(job: MyJob, at: Date.tomorrow.noon) do MyJob.set(wait_until: Date.tomorrow.noon).perform_later end end ``` perform\_enqueued\_jobs(only: nil, except: nil, queue: nil, at: nil, &block) Show source ``` # File activejob/lib/active_job/test_helper.rb, line 598 def perform_enqueued_jobs(only: nil, except: nil, queue: nil, at: nil, &block) return flush_enqueued_jobs(only: only, except: except, queue: queue, at: at) unless block_given? 
  validate_option(only: only, except: except)

  old_perform_enqueued_jobs = queue_adapter.perform_enqueued_jobs
  old_perform_enqueued_at_jobs = queue_adapter.perform_enqueued_at_jobs
  old_filter = queue_adapter.filter
  old_reject = queue_adapter.reject
  old_queue = queue_adapter.queue
  old_at = queue_adapter.at

  begin
    queue_adapter.perform_enqueued_jobs = true
    queue_adapter.perform_enqueued_at_jobs = true
    queue_adapter.filter = only
    queue_adapter.reject = except
    queue_adapter.queue = queue
    queue_adapter.at = at

    _assert_nothing_raised_or_warn("perform_enqueued_jobs", &block)
  ensure
    queue_adapter.perform_enqueued_jobs = old_perform_enqueued_jobs
    queue_adapter.perform_enqueued_at_jobs = old_perform_enqueued_at_jobs
    queue_adapter.filter = old_filter
    queue_adapter.reject = old_reject
    queue_adapter.queue = old_queue
    queue_adapter.at = old_at
  end
end
```

Performs all enqueued jobs. If a block is given, performs all of the jobs that were enqueued throughout the duration of the block. If a block is not given, performs all of the enqueued jobs up to this point in the test.

```
def test_perform_enqueued_jobs
  perform_enqueued_jobs do
    MyJob.perform_later(1, 2, 3)
  end
  assert_performed_jobs 1
end

def test_perform_enqueued_jobs_without_block
  MyJob.perform_later(1, 2, 3)

  perform_enqueued_jobs

  assert_performed_jobs 1
end
```

This method also supports filtering. If the `:only` option is specified, then only the listed job(s) will be performed.

```
def test_perform_enqueued_jobs_with_only
  perform_enqueued_jobs(only: MyJob) do
    MyJob.perform_later(1, 2, 3)    # will be performed
    HelloJob.perform_later(1, 2, 3) # will not be performed
  end
  assert_performed_jobs 1
end
```

If the `:except` option is specified, then all jobs except those of the given class(es) will be performed.
```
def test_perform_enqueued_jobs_with_except
  perform_enqueued_jobs(except: HelloJob) do
    MyJob.perform_later(1, 2, 3)    # will be performed
    HelloJob.perform_later(1, 2, 3) # will not be performed
  end
  assert_performed_jobs 1
end
```

The `:only` and `:except` options accept a [`Class`](../class), an [`Array`](../array) of [`Class`](../class), or a Proc. When passed a Proc, an instance of the job will be passed as argument.

If the `:queue` option is specified, then only the job(s) enqueued to a specific queue will be performed.

```
def test_perform_enqueued_jobs_with_queue
  perform_enqueued_jobs queue: :some_queue do
    MyJob.set(queue: :some_queue).perform_later(1, 2, 3)    # will be performed
    HelloJob.set(queue: :other_queue).perform_later(1, 2, 3) # will not be performed
  end
  assert_performed_jobs 1
end
```

If the `:at` option is specified, then only jobs enqueued to run immediately or at/before the given time will be performed.

queue\_adapter() Show source

```
# File activejob/lib/active_job/test_helper.rb, line 634
def queue_adapter
  ActiveJob::Base.queue_adapter
end
```

Accesses the [`queue_adapter`](testhelper#method-i-queue_adapter) set by [`ActiveJob::Base`](base).

```
def test_assert_job_has_custom_queue_adapter_set
  assert_instance_of CustomQueueAdapter, HelloJob.queue_adapter
end
```

queue\_adapter\_for\_test() Show source

```
# File activejob/lib/active_job/test_helper.rb, line 66
def queue_adapter_for_test
  ActiveJob::QueueAdapters::TestAdapter.new
end
```

Specifies the queue adapter to use with all Active Job test helpers. Returns an instance of the queue adapter and defaults to `ActiveJob::QueueAdapters::TestAdapter`.

Note: The adapter provided by this method must provide some additional methods beyond those expected of a standard `ActiveJob::QueueAdapter` in order to be used with the Active Job test helpers. Refer to `ActiveJob::QueueAdapters::TestAdapter`.
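The contract a test adapter has to satisfy can be illustrated with a toy implementation. Below is a minimal, hypothetical sketch (a `RecordingAdapter`, not the real `ActiveJob::QueueAdapters::TestAdapter`): instead of executing jobs, it records them, which is what lets helpers such as `assert_enqueued_jobs` inspect what a test did. A `Struct` stands in for a real job instance.

```
# Hypothetical sketch, NOT the real TestAdapter: an adapter that
# records jobs instead of running them, so a test can inspect what
# was enqueued. A Struct stands in for an ActiveJob instance here.
FakeJob = Struct.new(:arguments)

class RecordingAdapter
  attr_reader :enqueued_jobs

  def initialize
    @enqueued_jobs = []
  end

  # Standard adapter entry point for immediate jobs.
  def enqueue(job)
    @enqueued_jobs << { job: job.class, args: job.arguments }
  end

  # Entry point for delayed jobs: also record the target timestamp.
  def enqueue_at(job, timestamp)
    @enqueued_jobs << { job: job.class, args: job.arguments, at: timestamp }
  end
end

adapter = RecordingAdapter.new
adapter.enqueue(FakeJob.new([1, 2]))
adapter.enqueue_at(FakeJob.new([]), Time.now + 3600)
adapter.enqueued_jobs.length # => 2
```

The real `TestAdapter` additionally exposes the filtering state (`filter`, `reject`, `queue`, `at`) that `perform_enqueued_jobs` toggles, which is the "additional methods" the note above refers to.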
rails module ActiveJob::Execution module ActiveJob::Execution ============================ Included modules: [ActiveSupport::Rescuable](../activesupport/rescuable) perform(\*) Show source ``` # File activejob/lib/active_job/execution.rb, line 51 def perform(*) fail NotImplementedError end ``` perform\_now() Show source ``` # File activejob/lib/active_job/execution.rb, line 40 def perform_now # Guard against jobs that were persisted before we started counting executions by zeroing out nil counters self.executions = (executions || 0) + 1 deserialize_arguments_if_needed _perform_job rescue Exception => exception rescue_with_handler(exception) || raise end ``` Performs the job immediately. The job is not sent to the queuing adapter but directly executed by blocking the execution of others until it's finished. `perform_now` returns the value of your job's `perform` method. ``` class MyJob < ActiveJob::Base def perform "Hello World!" end end puts MyJob.new(*args).perform_now # => "Hello World!" ``` rails module ActiveJob::Callbacks module ActiveJob::Callbacks ============================ Included modules: [ActiveSupport::Callbacks](../activesupport/callbacks) Active Job [`Callbacks`](callbacks) =================================== Active Job provides hooks during the life cycle of a job. [`Callbacks`](callbacks) allow you to trigger logic during this cycle. Available callbacks are: * `before_enqueue` * `around_enqueue` * `after_enqueue` * `before_perform` * `around_perform` * `after_perform` NOTE: Calling the same callback multiple times will overwrite previous callback definitions. 
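The order these hooks fire in can be sketched without Active Job at all. A minimal plain-Ruby illustration (hypothetical lambdas; Active Job itself implements this via `ActiveSupport::Callbacks`) of the `perform` life cycle: `before`, the first half of `around`, the job's `perform`, the second half of `around`, then `after`.

```
# Hypothetical sketch of the perform callback order; not
# ActiveSupport::Callbacks, just lambdas recording events.
events = []

before_perform = -> { events << :before_perform }
around_perform = lambda do |inner|
  events << :around_enter # runs before perform
  inner.call
  events << :around_exit  # runs after perform
end
after_perform = -> { events << :after_perform }
perform       = -> { events << :perform }

before_perform.call
around_perform.call(perform)
after_perform.call

events
# => [:before_perform, :around_enter, :perform, :around_exit, :after_perform]
```

The same ordering applies to the `enqueue` hooks around `ActiveJob::Enqueuing#enqueue`.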
rails class ActiveJob::Base class ActiveJob::Base ====================== Parent: [Object](../object) Included modules: [ActiveJob::Core](core), ActiveJob::QueueAdapter, [ActiveJob::QueueName](queuename), [ActiveJob::QueuePriority](queuepriority), [ActiveJob::Enqueuing](enqueuing), [ActiveJob::Execution](execution), [ActiveJob::Callbacks](callbacks), [ActiveJob::Exceptions](exceptions) Active Job ========== Active Job objects can be configured to work with different backend queuing frameworks. To specify a queue adapter to use: ``` ActiveJob::Base.queue_adapter = :inline ``` A list of supported adapters can be found in [`QueueAdapters`](queueadapters). Active Job objects can be defined by creating a class that inherits from the [`ActiveJob::Base`](base) class. The only necessary method to implement is the “perform” method. To define an Active Job object: ``` class ProcessPhotoJob < ActiveJob::Base def perform(photo) photo.watermark!('Rails') photo.rotate!(90.degrees) photo.resize_to_fit!(300, 300) photo.upload! end end ``` Records that are passed in are serialized/deserialized using Global ID. More information can be found in [`Arguments`](arguments). To enqueue a job to be performed as soon as the queuing system is free: ``` ProcessPhotoJob.perform_later(photo) ``` To enqueue a job to be processed at some point in the future: ``` ProcessPhotoJob.set(wait_until: Date.tomorrow.noon).perform_later(photo) ``` More information can be found in [`ActiveJob::Core::ClassMethods#set`](core/classmethods#method-i-set) A job can also be processed immediately without sending to the queue: ``` ProcessPhotoJob.perform_now(photo) ``` [`Exceptions`](exceptions) -------------------------- * [`DeserializationError`](deserializationerror) - Error class for deserialization errors. * [`SerializationError`](serializationerror) - Error class for serialization errors. 
rails module ActiveJob::QueueName module ActiveJob::QueueName ============================ queue\_name() Show source ``` # File activejob/lib/active_job/queue_name.rb, line 62 def queue_name if @queue_name.is_a?(Proc) @queue_name = self.class.queue_name_from_part(instance_exec(&@queue_name)) end @queue_name end ``` Returns the name of the queue the job will be run on. rails module ActiveJob::QueueAdapters module ActiveJob::QueueAdapters ================================ Active Job adapters ------------------- Active Job has adapters for the following queuing backends: * [Backburner](https://github.com/nesquena/backburner) * [Delayed Job](https://github.com/collectiveidea/delayed_job) * [Que](https://github.com/chanks/que) * [queue\_classic](https://github.com/QueueClassic/queue_classic) * [Resque](https://github.com/resque/resque) * [Sidekiq](https://sidekiq.org) * [Sneakers](https://github.com/jondot/sneakers) * [Sucker Punch](https://github.com/brandonhilkert/sucker_punch) * [Active Job Async Job](https://api.rubyonrails.org/classes/ActiveJob/QueueAdapters/AsyncAdapter.html) * [Active Job Inline](https://api.rubyonrails.org/classes/ActiveJob/QueueAdapters/InlineAdapter.html) * Please Note: We are not accepting pull requests for new adapters. See the README for more details. 
### Backends Features ``` | | Async | Queues | Delayed | Priorities | Timeout | Retries | |-------------------|-------|--------|------------|------------|---------|---------| | Backburner | Yes | Yes | Yes | Yes | Job | Global | | Delayed Job | Yes | Yes | Yes | Job | Global | Global | | Que | Yes | Yes | Yes | Job | No | Job | | queue_classic | Yes | Yes | Yes* | No | No | No | | Resque | Yes | Yes | Yes (Gem) | Queue | Global | Yes | | Sidekiq | Yes | Yes | Yes | Queue | No | Job | | Sneakers | Yes | Yes | No | Queue | Queue | No | | Sucker Punch | Yes | Yes | Yes | No | No | No | | Active Job Async | Yes | Yes | Yes | No | No | No | | Active Job Inline | No | Yes | N/A | N/A | N/A | N/A | ``` #### Async Yes: The Queue Adapter has the ability to run the job in a non-blocking manner. It either runs on a separate or forked process, or on a different thread. No: The job is run in the same process. #### Queues Yes: Jobs may set which queue they are run in with queue\_as or by using the set method. #### Delayed Yes: The adapter will run the job in the future through perform\_later. (Gem): An additional gem is required to use perform\_later with this adapter. No: The adapter will run jobs at the next opportunity and cannot use perform\_later. N/A: The adapter does not support queuing. NOTE: queue\_classic supports job scheduling since version 3.1. For older versions you can use the queue\_classic-later gem. #### Priorities The order in which jobs are processed can be configured differently depending on the adapter. Job: Any class inheriting from the adapter may set the priority on the job object relative to other jobs. Queue: The adapter can set the priority for job queues, when setting a queue with Active Job this will be respected. Yes: Allows the priority to be set on the job object, at the queue level or as default configuration option. No: The adapter does not allow the priority of jobs to be configured. 
N/A: The adapter does not support queuing, and therefore cannot order jobs by priority.

#### Timeout

Whether a job stops after an allotted run time.

Job: The timeout can be set for each instance of the job class.

Queue: The timeout is set for all jobs on the queue.

Global: The adapter is configured so that all jobs have a maximum run time.

No: The adapter does not allow the timeout of jobs to be configured.

N/A: This adapter does not run in a separate process, and therefore timeout is unsupported.

#### Retries

Job: The number of retries can be set per instance of the job class.

Yes: The number of retries can be configured globally, for each instance, or on the queue. This adapter may also present failed instances of the job class that can be restarted.

Global: The adapter has a global number of retries.

No: The adapter does not allow the number of retries to be configured.

N/A: The adapter does not run in a separate process, and therefore doesn't support retries.

### Async and Inline Queue Adapters

Active Job has two built-in queue adapters intended for development and testing: `:async` and `:inline`.

lookup(name) Show source

```
# File activejob/lib/active_job/queue_adapters.rb, line 136
def lookup(name)
  const_get(name.to_s.camelize << ADAPTER)
end
```

Returns the adapter for the specified name.

```
ActiveJob::QueueAdapters.lookup(:sidekiq)
# => ActiveJob::QueueAdapters::SidekiqAdapter
```

rails module ActiveJob::Core

module ActiveJob::Core
=======================

Provides general behavior that will be included into every Active Job object that inherits from [`ActiveJob::Base`](base).

arguments[RW] Job arguments

enqueue\_error[RW] Track any exceptions raised by the backend so callers can inspect the errors.

enqueued\_at[RW] Track when a job was enqueued

exception\_executions[RW] [`Hash`](../hash) that contains the number of times this job handled errors for each specific retry\_on declaration.
Keys are the string representation of the exceptions listed in the retry\_on declaration, while its associated value holds the number of executions where the corresponding retry\_on declaration handled one of its listed exceptions. executions[RW] Number of times this job has been executed (which increments on every retry, like after an exception). job\_id[RW] Job Identifier locale[RW] I18n.locale to be used during the job. priority[W] Priority that the job will have (lower is more priority). provider\_job\_id[RW] ID optionally provided by adapter queue\_name[W] Queue in which the job will reside. scheduled\_at[RW] Timestamp when the job should be performed serialized\_arguments[W] timezone[RW] Timezone to be used during the job. new(\*arguments) Show source ``` # File activejob/lib/active_job/core.rb, line 91 def initialize(*arguments) @arguments = arguments @job_id = SecureRandom.uuid @queue_name = self.class.queue_name @priority = self.class.priority @executions = 0 @exception_executions = {} @timezone = Time.zone&.name end ``` Creates a new job instance. Takes the arguments that will be passed to the perform method. deserialize(job\_data) Show source ``` # File activejob/lib/active_job/core.rb, line 146 def deserialize(job_data) self.job_id = job_data["job_id"] self.provider_job_id = job_data["provider_job_id"] self.queue_name = job_data["queue_name"] self.priority = job_data["priority"] self.serialized_arguments = job_data["arguments"] self.executions = job_data["executions"] self.exception_executions = job_data["exception_executions"] self.locale = job_data["locale"] || I18n.locale.to_s self.timezone = job_data["timezone"] || Time.zone&.name self.enqueued_at = job_data["enqueued_at"] end ``` Attaches the stored job data to the current instance. 
Receives a hash returned from `serialize` #### Examples ``` class DeliverWebhookJob < ActiveJob::Base attr_writer :attempt_number def attempt_number @attempt_number ||= 0 end def serialize super.merge('attempt_number' => attempt_number + 1) end def deserialize(job_data) super self.attempt_number = job_data['attempt_number'] end rescue_from(Timeout::Error) do |exception| raise exception if attempt_number > 5 retry_job(wait: 10) end end ``` serialize() Show source ``` # File activejob/lib/active_job/core.rb, line 104 def serialize { "job_class" => self.class.name, "job_id" => job_id, "provider_job_id" => provider_job_id, "queue_name" => queue_name, "priority" => priority, "arguments" => serialize_arguments_if_needed(arguments), "executions" => executions, "exception_executions" => exception_executions, "locale" => I18n.locale.to_s, "timezone" => timezone, "enqueued_at" => Time.now.utc.iso8601 } end ``` Returns a hash with the job data that can safely be passed to the queuing adapter. successfully\_enqueued?() Show source ``` # File activejob/lib/active_job/core.rb, line 49 def successfully_enqueued? @successfully_enqueued end ``` rails module ActiveJob::Exceptions module ActiveJob::Exceptions ============================= Provides behavior for retrying and discarding jobs on exceptions. retry\_job(options = {}) Show source ``` # File activejob/lib/active_job/exceptions.rb, line 124 def retry_job(options = {}) instrument :enqueue_retry, options.slice(:error, :wait) do enqueue options end end ``` Reschedules the job to be re-executed. This is useful in combination with the `rescue_from` option. When you rescue an exception from your job you can ask Active Job to retry performing your job. 
#### Options * `:wait` - Enqueues the job with the specified delay in seconds * `:wait_until` - Enqueues the job at the time specified * `:queue` - Enqueues the job on the specified queue * `:priority` - Enqueues the job with the specified priority #### Examples ``` class SiteScraperJob < ActiveJob::Base rescue_from(ErrorLoadingSite) do retry_job queue: :low_priority end def perform(*args) # raise ErrorLoadingSite if cannot scrape end end ``` rails class ActiveJob::SerializationError class ActiveJob::SerializationError ==================================== Parent: ArgumentError Raised when an unsupported argument type is set as a job argument. We currently support [`String`](../string), [`Integer`](../integer), `Float`, [`NilClass`](../nilclass), [`TrueClass`](../trueclass), [`FalseClass`](../falseclass), `BigDecimal`, `Symbol`, [`Date`](../date), [`Time`](../time), [`DateTime`](../datetime), [`ActiveSupport::TimeWithZone`](../activesupport/timewithzone), [`ActiveSupport::Duration`](../activesupport/duration), [`Hash`](../hash), [`ActiveSupport::HashWithIndifferentAccess`](../activesupport/hashwithindifferentaccess), [`Array`](../array), [`Range`](../range) or GlobalID::Identification instances, although this can be extended by adding custom serializers. Raised if you set the key for a [`Hash`](../hash) something else than a string or a symbol. Also raised when trying to serialize an object which can't be identified with a GlobalID - such as an unpersisted Active Record model. rails module ActiveJob::Enqueuing module ActiveJob::Enqueuing ============================ enqueue(options = {}) Show source ``` # File activejob/lib/active_job/enqueuing.rb, line 59 def enqueue(options = {}) set(options) self.successfully_enqueued = false run_callbacks :enqueue do if scheduled_at queue_adapter.enqueue_at self, scheduled_at else queue_adapter.enqueue self end self.successfully_enqueued = true rescue EnqueueError => e self.enqueue_error = e end if successfully_enqueued? 
self else false end end ``` Enqueues the job to be performed by the queue adapter. #### Options * `:wait` - Enqueues the job with the specified delay * `:wait_until` - Enqueues the job at the time specified * `:queue` - Enqueues the job on the specified queue * `:priority` - Enqueues the job with the specified priority #### Examples ``` my_job_instance.enqueue my_job_instance.enqueue wait: 5.minutes my_job_instance.enqueue queue: :important my_job_instance.enqueue wait_until: Date.tomorrow.midnight my_job_instance.enqueue priority: 10 ``` rails module ActiveJob::QueuePriority module ActiveJob::QueuePriority ================================ priority() Show source ``` # File activejob/lib/active_job/queue_priority.rb, line 36 def priority if @priority.is_a?(Proc) @priority = instance_exec(&@priority) end @priority end ``` Returns the priority that the job will be created with rails class ActiveJob::EnqueueError class ActiveJob::EnqueueError ============================== Parent: StandardError Can be raised by adapters if they wish to communicate to the caller a reason why the adapter was unexpectedly unable to enqueue a job. rails class ActiveJob::DeserializationError class ActiveJob::DeserializationError ====================================== Parent: StandardError Raised when an exception is raised during job arguments deserialization. Wraps the original exception raised as `cause`. rails module ActiveJob::Arguments module ActiveJob::Arguments ============================ deserialize(arguments) Show source ``` # File activejob/lib/active_job/arguments.rb, line 41 def deserialize(arguments) arguments.map { |argument| deserialize_argument(argument) } rescue raise DeserializationError end ``` Deserializes a set of arguments. Intrinsic types that can safely be deserialized without mutation are returned as-is. Arrays/Hashes are deserialized element by element. All other types are deserialized using GlobalID. 
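The element-by-element strategy described above for `deserialize` can be sketched in plain Ruby. This is a hypothetical `serialize_argument` helper, not the real code: `ActiveJob::Arguments` also handles symbols, dates, `BigDecimal`, GlobalID objects, and custom serializers, and restricts hash keys to strings and symbols.

```
# Hypothetical sketch: intrinsic types pass through unchanged,
# arrays and hashes are handled element by element, and anything
# else would need a GlobalID-style serializer.
INTRINSIC_TYPES = [String, Integer, Float, NilClass, TrueClass, FalseClass].freeze

def serialize_argument(arg)
  case arg
  when *INTRINSIC_TYPES
    arg
  when Array
    arg.map { |element| serialize_argument(element) }
  when Hash
    arg.to_h { |key, value| [key.to_s, serialize_argument(value)] }
  else
    raise ArgumentError, "unsupported type #{arg.class}; needs a serializer"
  end
end

serialize_argument([1, "a", { id: 2 }]) # => [1, "a", {"id" => 2}]
```

Deserialization walks the same structure in reverse, which is why both methods are documented as element-by-element.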
serialize(arguments) Show source ``` # File activejob/lib/active_job/arguments.rb, line 33 def serialize(arguments) arguments.map { |argument| serialize_argument(argument) } end ``` Serializes a set of arguments. Intrinsic types that can safely be serialized without mutation are returned as-is. Arrays/Hashes are serialized element by element. All other types are serialized using GlobalID. rails module ActiveJob::QueuePriority::ClassMethods module ActiveJob::QueuePriority::ClassMethods ============================================== Includes the ability to override the default queue priority. queue\_with\_priority(priority = nil, &block) Show source ``` # File activejob/lib/active_job/queue_priority.rb, line 22 def queue_with_priority(priority = nil, &block) if block_given? self.priority = block else self.priority = priority end end ``` Specifies the priority of the queue to create the job with. ``` class PublishToFeedJob < ActiveJob::Base queue_with_priority 50 def perform(post) post.to_feed! end end ``` Specify either an argument or a block. rails module ActiveJob::Core::ClassMethods module ActiveJob::Core::ClassMethods ===================================== These methods will be included into any Active Job object, adding helpers for de/serialization and creation of job instances. deserialize(job\_data) Show source ``` # File activejob/lib/active_job/core.rb, line 60 def deserialize(job_data) job = job_data["job_class"].constantize.new job.deserialize(job_data) job end ``` Creates a new job instance from a hash created with `serialize` set(options = {}) Show source ``` # File activejob/lib/active_job/core.rb, line 84 def set(options = {}) ConfiguredJob.new(self, options) end ``` Creates a job preconfigured with the given options. 
You can call perform\_later with the job arguments to enqueue the job with the preconfigured options #### Options * `:wait` - Enqueues the job with the specified delay * `:wait_until` - Enqueues the job at the time specified * `:queue` - Enqueues the job on the specified queue * `:priority` - Enqueues the job with the specified priority #### Examples ``` VideoJob.set(queue: :some_queue).perform_later(Video.last) VideoJob.set(wait: 5.minutes).perform_later(Video.last) VideoJob.set(wait_until: Time.now.tomorrow).perform_later(Video.last) VideoJob.set(queue: :some_queue, wait: 5.minutes).perform_later(Video.last) VideoJob.set(queue: :some_queue, wait_until: Time.now.tomorrow).perform_later(Video.last) VideoJob.set(queue: :some_queue, wait: 5.minutes, priority: 10).perform_later(Video.last) ``` rails module ActiveJob::Callbacks::ClassMethods module ActiveJob::Callbacks::ClassMethods ========================================== These methods will be included into any Active Job object, adding callbacks for `perform` and `enqueue` methods. after\_enqueue(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 147 def after_enqueue(*filters, &blk) set_callback(:enqueue, :after, *filters, &blk) end ``` Defines a callback that will get called right after the job is enqueued. ``` class VideoProcessJob < ActiveJob::Base queue_as :default after_enqueue do |job| $statsd.increment "enqueue-video-job.success" end def perform(video_id) Video.find(video_id).process end end ``` after\_perform(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 76 def after_perform(*filters, &blk) set_callback(:perform, :after, *filters, &blk) end ``` Defines a callback that will get called right after the job's perform method has finished. 
``` class VideoProcessJob < ActiveJob::Base queue_as :default after_perform do |job| UserMailer.notify_video_processed(job.arguments.first) end def perform(video_id) Video.find(video_id).process end end ``` around\_enqueue(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 168 def around_enqueue(*filters, &blk) set_callback(:enqueue, :around, *filters, &blk) end ``` Defines a callback that will get called around the enqueuing of the job. ``` class VideoProcessJob < ActiveJob::Base queue_as :default around_enqueue do |job, block| $statsd.time "video-job.process" do block.call end end def perform(video_id) Video.find(video_id).process end end ``` around\_perform(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 109 def around_perform(*filters, &blk) set_callback(:perform, :around, *filters, &blk) end ``` Defines a callback that will get called around the job's perform method. ``` class VideoProcessJob < ActiveJob::Base queue_as :default around_perform do |job, block| UserMailer.notify_video_started_processing(job.arguments.first) block.call UserMailer.notify_video_processed(job.arguments.first) end def perform(video_id) Video.find(video_id).process end end ``` You can access the return value of the job only if the execution wasn't halted. ``` class VideoProcessJob < ActiveJob::Base around_perform do |job, block| value = block.call puts value # => "Hello World!" end def perform "Hello World!" end end ``` before\_enqueue(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 128 def before_enqueue(*filters, &blk) set_callback(:enqueue, :before, *filters, &blk) end ``` Defines a callback that will get called right before the job is enqueued. 
``` class VideoProcessJob < ActiveJob::Base queue_as :default before_enqueue do |job| $statsd.increment "enqueue-video-job.try" end def perform(video_id) Video.find(video_id).process end end ``` before\_perform(\*filters, &blk) Show source ``` # File activejob/lib/active_job/callbacks.rb, line 57 def before_perform(*filters, &blk) set_callback(:perform, :before, *filters, &blk) end ``` Defines a callback that will get called right before the job's perform method is executed. ``` class VideoProcessJob < ActiveJob::Base queue_as :default before_perform do |job| UserMailer.notify_video_started_processing(job.arguments.first) end def perform(video_id) Video.find(video_id).process end end ```
rails module ActiveJob::Enqueuing::ClassMethods module ActiveJob::Enqueuing::ClassMethods ========================================== Includes the `perform_later` method for job initialization. perform\_later(...) { |job| ... } Show source ``` # File activejob/lib/active_job/enqueuing.rb, line 28 def perform_later(...) job = job_or_instantiate(...) enqueue_result = job.enqueue yield job if block_given? enqueue_result end ``` Push a job onto the queue. By default the arguments must be either [`String`](../../string), [`Integer`](../../integer), `Float`, [`NilClass`](../../nilclass), [`TrueClass`](../../trueclass), [`FalseClass`](../../falseclass), `BigDecimal`, `Symbol`, [`Date`](../../date), [`Time`](../../time), [`DateTime`](../../datetime), [`ActiveSupport::TimeWithZone`](../../activesupport/timewithzone), [`ActiveSupport::Duration`](../../activesupport/duration), [`Hash`](../../hash), [`ActiveSupport::HashWithIndifferentAccess`](../../activesupport/hashwithindifferentaccess), [`Array`](../../array), [`Range`](../../range) or GlobalID::Identification instances, although this can be extended by adding custom serializers. Returns an instance of the job class queued with arguments available in Job#arguments or false if the enqueue did not succeed. After the attempted enqueue, the job will be yielded to an optional block. job\_or\_instantiate(\*args) Show source ``` # File activejob/lib/active_job/enqueuing.rb, line 38 def job_or_instantiate(*args) # :doc: args.first.is_a?(self) ? args.first : new(*args) end ``` rails class ActiveJob::QueueAdapters::QueueClassicAdapter class ActiveJob::QueueAdapters::QueueClassicAdapter ==================================================== Parent: [Object](../../object) queue\_classic adapter for Active Job ------------------------------------- queue\_classic provides a simple interface to a PostgreSQL-backed message queue. 
queue\_classic specializes in concurrent locking and minimizing database load while providing a simple, intuitive developer experience. queue\_classic assumes that you are already using PostgreSQL in your production environment and that adding another dependency (e.g. redis, beanstalkd, 0mq) is undesirable. Read more about queue\_classic [here](https://github.com/QueueClassic/queue_classic). To use queue\_classic set the queue\_adapter config to `:queue_classic`. ``` Rails.application.config.active_job.queue_adapter = :queue_classic ``` build\_queue(queue\_name) Show source ``` # File activejob/lib/active_job/queue_adapters/queue_classic_adapter.rb, line 45 def build_queue(queue_name) QC::Queue.new(queue_name) end ``` Builds a `QC::Queue` object to schedule jobs on. If you have a custom `QC::Queue` subclass you'll need to subclass `ActiveJob::QueueAdapters::QueueClassicAdapter` and override the `build_queue` method. rails class ActiveJob::QueueAdapters::ResqueAdapter class ActiveJob::QueueAdapters::ResqueAdapter ============================================== Parent: [Object](../../object) Resque adapter for Active Job ----------------------------- Resque (pronounced like “rescue”) is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and processing them later. Read more about Resque [here](https://github.com/resque/resque). To use Resque set the queue\_adapter config to `:resque`. ``` Rails.application.config.active_job.queue_adapter = :resque ``` rails class ActiveJob::QueueAdapters::QueAdapter class ActiveJob::QueueAdapters::QueAdapter =========================================== Parent: [Object](../../object) Que adapter for Active Job -------------------------- Que is a high-performance alternative to DelayedJob or QueueClassic that improves the reliability of your application by protecting your jobs with the same ACID guarantees as the rest of your data. 
Que is a queue for Ruby and PostgreSQL that manages jobs using advisory locks. Read more about Que [here](https://github.com/chanks/que). To use Que set the queue\_adapter config to `:que`. ``` Rails.application.config.active_job.queue_adapter = :que ``` rails class ActiveJob::QueueAdapters::BackburnerAdapter class ActiveJob::QueueAdapters::BackburnerAdapter ================================================== Parent: [Object](../../object) Backburner adapter for Active Job --------------------------------- Backburner is a beanstalkd-powered job queue that can handle a very high volume of jobs. You create background jobs and place them on multiple work queues to be processed later. Read more about Backburner [here](https://github.com/nesquena/backburner). To use Backburner set the queue\_adapter config to `:backburner`. ``` Rails.application.config.active_job.queue_adapter = :backburner ``` rails class ActiveJob::QueueAdapters::SidekiqAdapter class ActiveJob::QueueAdapters::SidekiqAdapter =============================================== Parent: [Object](../../object) Sidekiq adapter for Active Job ------------------------------ Simple, efficient background processing for Ruby. Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with it to make background processing dead simple. Read more about Sidekiq [here](http://sidekiq.org). To use Sidekiq set the queue\_adapter config to `:sidekiq`. ``` Rails.application.config.active_job.queue_adapter = :sidekiq ``` rails class ActiveJob::QueueAdapters::SneakersAdapter class ActiveJob::QueueAdapters::SneakersAdapter ================================================ Parent: [Object](../../object) Sneakers adapter for Active Job ------------------------------- A high-performance RabbitMQ background processing framework for Ruby. 
Sneakers is being used in production for both I/O- and CPU-intensive workloads, and has achieved the goals of high performance and zero maintenance, as designed. Read more about Sneakers [here](https://github.com/jondot/sneakers). To use Sneakers set the queue\_adapter config to `:sneakers`. ``` Rails.application.config.active_job.queue_adapter = :sneakers ``` new() Show source ``` # File activejob/lib/active_job/queue_adapters/sneakers_adapter.rb, line 21 def initialize @monitor = Monitor.new end ``` rails class ActiveJob::QueueAdapters::DelayedJobAdapter class ActiveJob::QueueAdapters::DelayedJobAdapter ================================================== Parent: [Object](../../object) Delayed Job adapter for Active Job ---------------------------------- Delayed::Job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. Although DJ can have many storage backends, one of the most used is based on Active Record. Read more about Delayed Job [here](https://github.com/collectiveidea/delayed_job). To use Delayed Job, set the queue\_adapter config to `:delayed_job`. ``` Rails.application.config.active_job.queue_adapter = :delayed_job ``` rails class ActiveJob::QueueAdapters::InlineAdapter class ActiveJob::QueueAdapters::InlineAdapter ============================================== Parent: [Object](../../object) Active Job Inline adapter ------------------------- When enqueuing jobs with the Inline adapter, the job will be executed immediately. To use the Inline adapter, set the queue\_adapter config to `:inline`. ``` Rails.application.config.active_job.queue_adapter = :inline ``` rails class ActiveJob::QueueAdapters::AsyncAdapter class ActiveJob::QueueAdapters::AsyncAdapter ============================================= Parent: [Object](../../object) Active Job Async adapter ------------------------ The Async adapter runs jobs with an in-process thread pool. This is the default queue adapter. 
It's well-suited for dev/test since it doesn't need an external infrastructure, but it's a poor fit for production since it drops pending jobs on restart. To use this adapter, set queue adapter to `:async`: ``` config.active_job.queue_adapter = :async ``` To configure the adapter's thread pool, instantiate the adapter and pass your own config: ``` config.active_job.queue_adapter = ActiveJob::QueueAdapters::AsyncAdapter.new \ min_threads: 1, max_threads: 2 * Concurrent.processor_count, idletime: 600.seconds ``` The adapter uses a [Concurrent Ruby](https://github.com/ruby-concurrency/concurrent-ruby) thread pool to schedule and execute jobs. Since jobs share a single thread pool, long-running jobs will block short-lived jobs. Fine for dev/test; bad for production. new(\*\*executor\_options) Show source ``` # File activejob/lib/active_job/queue_adapters/async_adapter.rb, line 35 def initialize(**executor_options) @scheduler = Scheduler.new(**executor_options) end ``` See [Concurrent::ThreadPoolExecutor](https://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/ThreadPoolExecutor.html) for executor options. rails class ActiveJob::QueueAdapters::TestAdapter class ActiveJob::QueueAdapters::TestAdapter ============================================ Parent: [Object](../../object) Test adapter for Active Job --------------------------- The test adapter should be used only in testing. Along with `ActiveJob::TestCase` and `ActiveJob::TestHelper` it makes a great tool to test your Rails application. To use the test adapter set queue\_adapter config to `:test`. 
``` Rails.application.config.active_job.queue_adapter = :test ``` at[RW] enqueued\_jobs[W] filter[RW] perform\_enqueued\_at\_jobs[RW] perform\_enqueued\_jobs[RW] performed\_jobs[W] queue[RW] reject[RW] enqueued\_jobs() Show source ``` # File activejob/lib/active_job/queue_adapters/test_adapter.rb, line 19 def enqueued_jobs @enqueued_jobs ||= [] end ``` Provides a store of all the enqueued jobs with the [`TestAdapter`](testadapter) so you can check them. performed\_jobs() Show source ``` # File activejob/lib/active_job/queue_adapters/test_adapter.rb, line 24 def performed_jobs @performed_jobs ||= [] end ``` Provides a store of all the performed jobs with the [`TestAdapter`](testadapter) so you can check them. rails class ActiveJob::QueueAdapters::SuckerPunchAdapter class ActiveJob::QueueAdapters::SuckerPunchAdapter =================================================== Parent: [Object](../../object) Sucker Punch adapter for Active Job ----------------------------------- Sucker Punch is a single-process Ruby asynchronous processing library. This reduces the cost of hosting on a service like Heroku along with the memory footprint of having to maintain additional jobs if hosting on a dedicated server. All queues can run within a single application (e.g. Rails, Sinatra, etc.) process. Read more about Sucker Punch [here](https://github.com/brandonhilkert/sucker_punch). To use Sucker Punch set the queue\_adapter config to `:sucker_punch`. ``` Rails.application.config.active_job.queue_adapter = :sucker_punch ``` rails module ActiveJob::QueueAdapter::ClassMethods module ActiveJob::QueueAdapter::ClassMethods ============================================= Includes the setter method for changing the active queue adapter. QUEUE\_ADAPTER\_METHODS queue\_adapter() Show source ``` # File activejob/lib/active_job/queue_adapter.rb, line 24 def queue_adapter _queue_adapter end ``` Returns the backend queue provider. The default queue adapter is the `:async` queue. 
See [`QueueAdapters`](../queueadapters) for more information. queue\_adapter=(name\_or\_adapter) Show source ``` # File activejob/lib/active_job/queue_adapter.rb, line 37 def queue_adapter=(name_or_adapter) case name_or_adapter when Symbol, String queue_adapter = ActiveJob::QueueAdapters.lookup(name_or_adapter).new assign_adapter(name_or_adapter.to_s, queue_adapter) else if queue_adapter?(name_or_adapter) adapter_name = "#{name_or_adapter.class.name.demodulize.remove('Adapter').underscore}" assign_adapter(adapter_name, name_or_adapter) else raise ArgumentError end end end ``` Specify the backend queue provider. The default queue adapter is the `:async` queue. See [`QueueAdapters`](../queueadapters) for more information. queue\_adapter\_name() Show source ``` # File activejob/lib/active_job/queue_adapter.rb, line 30 def queue_adapter_name _queue_adapter_name end ``` Returns string denoting the name of the configured queue adapter. By default returns `"async"`. rails module ActiveJob::Exceptions::ClassMethods module ActiveJob::Exceptions::ClassMethods =========================================== discard\_on(\*exceptions) { |self, error| ... } Show source ``` # File activejob/lib/active_job/exceptions.rb, line 94 def discard_on(*exceptions) rescue_from(*exceptions) do |error| instrument :discard, error: error do yield self, error if block_given? end end end ``` Discard the job with no attempts to retry, if the exception is raised. This is useful when the subject of the job, like an Active Record, is no longer available, and the job is thus no longer relevant. You can also pass a block that'll be invoked. This block is yielded with the job instance as the first and the error instance as the second parameter. 
#### Example ``` class SearchIndexingJob < ActiveJob::Base discard_on ActiveJob::DeserializationError discard_on(CustomAppException) do |job, error| ExceptionNotifier.caught(error) end def perform(record) # Will raise ActiveJob::DeserializationError if the record can't be deserialized # Might raise CustomAppException for something domain specific end end ``` retry\_on(\*exceptions, wait: 3.seconds, attempts: 5, queue: nil, priority: nil, jitter: JITTER\_DEFAULT) { |self, error| ... } Show source ``` # File activejob/lib/active_job/exceptions.rb, line 58 def retry_on(*exceptions, wait: 3.seconds, attempts: 5, queue: nil, priority: nil, jitter: JITTER_DEFAULT) rescue_from(*exceptions) do |error| executions = executions_for(exceptions) if attempts == :unlimited || executions < attempts retry_job wait: determine_delay(seconds_or_duration_or_algorithm: wait, executions: executions, jitter: jitter), queue: queue, priority: priority, error: error else if block_given? instrument :retry_stopped, error: error do yield self, error end else instrument :retry_stopped, error: error raise error end end end end ``` Catch the exception and reschedule job for re-execution after so many seconds, for a specific number of attempts. If the exception keeps getting raised beyond the specified number of attempts, the exception is allowed to bubble up to the underlying queuing system, which may have its own retry mechanism or place it in a holding queue for inspection. You can also pass a block that'll be invoked if the retry attempts fail for custom logic rather than letting the exception bubble up. This block is yielded with the job instance as the first and the error instance as the second parameter. 
#### Options * `:wait` - Re-enqueues the job with a delay specified either in seconds (default: 3 seconds), as a computing proc that takes the number of executions so far as an argument, or as a symbol reference of `:exponentially_longer`, which applies the wait algorithm of `((executions**4) + (Kernel.rand * (executions**4) * jitter)) + 2` (first wait ~3s, then ~18s, then ~83s, etc) * `:attempts` - Re-enqueues the job the specified number of times (default: 5 attempts) or a symbol reference of `:unlimited` to retry the job until it succeeds * `:queue` - Re-enqueues the job on a different queue * `:priority` - Re-enqueues the job with a different priority * `:jitter` - A random delay of wait time used when calculating backoff. The default is 15% (0.15) which represents the upper bound of possible wait time (expressed as a percentage) #### Examples ``` class RemoteServiceJob < ActiveJob::Base retry_on CustomAppException # defaults to ~3s wait, 5 attempts retry_on AnotherCustomAppException, wait: ->(executions) { executions * 2 } retry_on CustomInfrastructureException, wait: 5.minutes, attempts: :unlimited retry_on ActiveRecord::Deadlocked, wait: 5.seconds, attempts: 3 retry_on Net::OpenTimeout, Timeout::Error, wait: :exponentially_longer, attempts: 10 # retries at most 10 times for Net::OpenTimeout and Timeout::Error combined # To retry at most 10 times for each individual exception: # retry_on Net::OpenTimeout, wait: :exponentially_longer, attempts: 10 # retry_on Net::ReadTimeout, wait: 5.seconds, jitter: 0.30, attempts: 10 # retry_on Timeout::Error, wait: :exponentially_longer, attempts: 10 retry_on(YetAnotherCustomAppException) do |job, error| ExceptionNotifier.caught(error) end def perform(*args) # Might raise CustomAppException, AnotherCustomAppException, or YetAnotherCustomAppException for something domain specific # Might raise ActiveRecord::Deadlocked when a local db deadlock is detected # Might raise Net::OpenTimeout or Timeout::Error when the remote 
service is down end end ``` rails class ActiveJob::Serializers::ObjectSerializer class ActiveJob::Serializers::ObjectSerializer =============================================== Parent: [Object](../../object) Included modules: [Singleton](../../singleton) [`Base`](../base) class for serializing and deserializing custom objects. Example: ``` class MoneySerializer < ActiveJob::Serializers::ObjectSerializer def serialize(money) super("amount" => money.amount, "currency" => money.currency) end def deserialize(hash) Money.new(hash["amount"], hash["currency"]) end private def klass Money end end ``` deserialize(json) Show source ``` # File activejob/lib/active_job/serializers/object_serializer.rb, line 42 def deserialize(json) raise NotImplementedError end ``` Deserializes an argument from a JSON primitive type. serialize(hash) Show source ``` # File activejob/lib/active_job/serializers/object_serializer.rb, line 37 def serialize(hash) { Arguments::OBJECT_SERIALIZER_KEY => self.class.name }.merge!(hash) end ``` Serializes an argument to a JSON primitive type. serialize?(argument) Show source ``` # File activejob/lib/active_job/serializers/object_serializer.rb, line 32 def serialize?(argument) argument.is_a?(klass) end ``` Determines if an argument should be serialized by a serializer. klass() Show source ``` # File activejob/lib/active_job/serializers/object_serializer.rb, line 48 def klass # :doc: raise NotImplementedError end ``` The class of the object that will be serialized. rails module ActiveJob::Execution::ClassMethods module ActiveJob::Execution::ClassMethods ========================================== Includes methods for executing and performing jobs instantly. perform\_now(...) Show source ``` # File activejob/lib/active_job/execution.rb, line 17 def perform_now(...) job_or_instantiate(...).perform_now end ``` Performs the job immediately. 
``` MyJob.perform_now("mike") ``` rails module ActiveJob::QueueName::ClassMethods module ActiveJob::QueueName::ClassMethods ========================================== Includes the ability to override the default queue name and prefix. queue\_as(part\_name = nil, &block) Show source ``` # File activejob/lib/active_job/queue_name.rb, line 40 def queue_as(part_name = nil, &block) if block_given? self.queue_name = block else self.queue_name = queue_name_from_part(part_name) end end ``` Specifies the name of the queue to process the job on. ``` class PublishToFeedJob < ActiveJob::Base queue_as :feeds def perform(post) post.to_feed! end end ``` Can be given a block that will evaluate in the context of the job allowing `self.arguments` to be accessed so that a dynamic queue name can be applied: ``` class PublishToFeedJob < ApplicationJob queue_as do post = self.arguments.first if post.paid? :paid_feeds else :feeds end end def perform(post) post.to_feed! end end ```
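The dynamic form works because the stored block is later evaluated in the context of the job instance, which is how `self.arguments` becomes reachable inside it. A minimal plain-Ruby sketch of that mechanism (the `FakeJob` class and its method names are illustrative, not Active Job's internals):

```
# Illustrative sketch only: store a queue_as-style block, then evaluate it
# against the instance with instance_exec to resolve a dynamic queue name.
class FakeJob
  class << self
    attr_accessor :queue_name
  end

  def self.queue_as(&block)
    self.queue_name = block
  end

  attr_reader :arguments

  def initialize(*arguments)
    @arguments = arguments
  end

  def resolved_queue_name
    q = self.class.queue_name
    q.respond_to?(:call) ? instance_exec(&q) : q
  end
end

FakeJob.queue_as { arguments.first > 10 ? :high_priority : :default }
FakeJob.new(42).resolved_queue_name  # => :high_priority
FakeJob.new(1).resolved_queue_name   # => :default
```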
rails module DateAndTime::Zones module DateAndTime::Zones ========================== in\_time\_zone(zone = ::Time.zone) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/zones.rb, line 20 def in_time_zone(zone = ::Time.zone) time_zone = ::Time.find_zone! zone time = acts_like?(:time) ? self : nil if time_zone time_with_zone(time, time_zone) else time || to_time end end ``` Returns the simultaneous time in `Time.zone` if a zone is given or if [`Time.zone_default`](../time#attribute-c-zone_default) is set. Otherwise, it returns the current time. ``` Time.zone = 'Hawaii' # => 'Hawaii' Time.utc(2000).in_time_zone # => Fri, 31 Dec 1999 14:00:00 HST -10:00 Date.new(2000).in_time_zone # => Sat, 01 Jan 2000 00:00:00 HST -10:00 ``` This method is similar to Time#localtime, except that it uses `Time.zone` as the local zone instead of the operating system's time zone. You can also pass in a TimeZone instance or string that identifies a TimeZone as an argument, and the conversion will be based on that zone instead of `Time.zone`. ``` Time.utc(2000).in_time_zone('Alaska') # => Fri, 31 Dec 1999 15:00:00 AKST -09:00 Date.new(2000).in_time_zone('Alaska') # => Sat, 01 Jan 2000 00:00:00 AKST -09:00 ``` rails module DateAndTime::Calculations module DateAndTime::Calculations ================================= DAYS\_INTO\_WEEK WEEKEND\_DAYS after?(date\_or\_time) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 72 def after?(date_or_time) self > date_or_time end ``` Returns true if the date/time falls after `date_or_time`. all\_day() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 300 def all_day beginning_of_day..end_of_day end ``` Returns a [`Range`](../range) representing the whole day of the current date/time. 
all\_month() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 311 def all_month beginning_of_month..end_of_month end ``` Returns a [`Range`](../range) representing the whole month of the current date/time. all\_quarter() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 316 def all_quarter beginning_of_quarter..end_of_quarter end ``` Returns a [`Range`](../range) representing the whole quarter of the current date/time. all\_week(start\_day = Date.beginning\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 306 def all_week(start_day = Date.beginning_of_week) beginning_of_week(start_day)..end_of_week(start_day) end ``` Returns a [`Range`](../range) representing the whole week of the current date/time. Week starts on start\_day, default is `Date.beginning_of_week` or `config.beginning_of_week` when set. all\_year() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 321 def all_year beginning_of_year..end_of_year end ``` Returns a [`Range`](../range) representing the whole year of the current date/time. 
at\_beginning\_of\_month() Alias for: [beginning\_of\_month](calculations#method-i-beginning_of_month) at\_beginning\_of\_quarter() Alias for: [beginning\_of\_quarter](calculations#method-i-beginning_of_quarter) at\_beginning\_of\_week(start\_day = Date.beginning\_of\_week) Alias for: [beginning\_of\_week](calculations#method-i-beginning_of_week) at\_beginning\_of\_year() Alias for: [beginning\_of\_year](calculations#method-i-beginning_of_year) at\_end\_of\_month() Alias for: [end\_of\_month](calculations#method-i-end_of_month) at\_end\_of\_quarter() Alias for: [end\_of\_quarter](calculations#method-i-end_of_quarter) at\_end\_of\_week(start\_day = Date.beginning\_of\_week) Alias for: [end\_of\_week](calculations#method-i-end_of_week) at\_end\_of\_year() Alias for: [end\_of\_year](calculations#method-i-end_of_year) before?(date\_or\_time) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 67 def before?(date_or_time) self < date_or_time end ``` Returns true if the date/time falls before `date_or_time`. beginning\_of\_month() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 125 def beginning_of_month first_hour(change(day: 1)) end ``` Returns a new date/time at the start of the month. ``` today = Date.today # => Thu, 18 Jun 2015 today.beginning_of_month # => Mon, 01 Jun 2015 ``` `DateTime` objects will have a time set to 0:00. ``` now = DateTime.current # => Thu, 18 Jun 2015 15:23:13 +0000 now.beginning_of_month # => Mon, 01 Jun 2015 00:00:00 +0000 ``` Also aliased as: [at\_beginning\_of\_month](calculations#method-i-at_beginning_of_month) beginning\_of\_quarter() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 139 def beginning_of_quarter first_quarter_month = month - (2 + month) % 3 beginning_of_month.change(month: first_quarter_month) end ``` Returns a new date/time at the start of the quarter. 
``` today = Date.today # => Fri, 10 Jul 2015 today.beginning_of_quarter # => Wed, 01 Jul 2015 ``` `DateTime` objects will have a time set to 0:00. ``` now = DateTime.current # => Fri, 10 Jul 2015 18:41:29 +0000 now.beginning_of_quarter # => Wed, 01 Jul 2015 00:00:00 +0000 ``` Also aliased as: [at\_beginning\_of\_quarter](calculations#method-i-at_beginning_of_quarter) beginning\_of\_week(start\_day = Date.beginning\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 257 def beginning_of_week(start_day = Date.beginning_of_week) result = days_ago(days_to_week_start(start_day)) acts_like?(:time) ? result.midnight : result end ``` Returns a new date/time representing the start of this week on the given day. Week is assumed to start on `start_day`, default is `Date.beginning_of_week` or `config.beginning_of_week` when set. `DateTime` objects have their time set to 0:00. Also aliased as: [at\_beginning\_of\_week](calculations#method-i-at_beginning_of_week) beginning\_of\_year() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 169 def beginning_of_year change(month: 1).beginning_of_month end ``` Returns a new date/time at the beginning of the year. ``` today = Date.today # => Fri, 10 Jul 2015 today.beginning_of_year # => Thu, 01 Jan 2015 ``` `DateTime` objects will have a time set to 0:00. ``` now = DateTime.current # => Fri, 10 Jul 2015 18:41:29 +0000 now.beginning_of_year # => Thu, 01 Jan 2015 00:00:00 +0000 ``` Also aliased as: [at\_beginning\_of\_year](calculations#method-i-at_beginning_of_year) days\_ago(days) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 77 def days_ago(days) advance(days: -days) end ``` Returns a new date/time the specified number of days ago. 
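The `month - (2 + month) % 3` expression in `beginning_of_quarter` above maps every month onto the first month of its quarter. A standalone sketch of just that arithmetic:

```
# Sketch of the quarter arithmetic used by beginning_of_quarter: for any
# month 1..12, subtracting (2 + month) % 3 lands on 1, 4, 7, or 10.
def first_month_of_quarter(month)
  month - (2 + month) % 3
end

(1..12).map { |m| first_month_of_quarter(m) }
# => [1, 1, 1, 4, 4, 4, 7, 7, 7, 10, 10, 10]
```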
days\_since(days) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 82 def days_since(days) advance(days: days) end ``` Returns a new date/time the specified number of days in the future. days\_to\_week\_start(start\_day = Date.beginning\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 248 def days_to_week_start(start_day = Date.beginning_of_week) start_day_number = DAYS_INTO_WEEK.fetch(start_day) (wday - start_day_number) % 7 end ``` Returns the number of days to the start of the week on the given day. Week is assumed to start on `start_day`, default is `Date.beginning_of_week` or `config.beginning_of_week` when set. end\_of\_month() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 286 def end_of_month last_day = ::Time.days_in_month(month, year) last_hour(days_since(last_day - day)) end ``` Returns a new date/time representing the end of the month. [`DateTime`](../datetime) objects will have a time set to 23:59:59. Also aliased as: [at\_end\_of\_month](calculations#method-i-at_end_of_month) end\_of\_quarter() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 154 def end_of_quarter last_quarter_month = month + (12 - month) % 3 beginning_of_month.change(month: last_quarter_month).end_of_month end ``` Returns a new date/time at the end of the quarter. ``` today = Date.today # => Fri, 10 Jul 2015 today.end_of_quarter # => Wed, 30 Sep 2015 ``` `DateTime` objects will have a time set to 23:59:59. 
``` now = DateTime.current # => Fri, 10 Jul 2015 18:41:29 +0000 now.end_of_quarter # => Wed, 30 Sep 2015 23:59:59 +0000 ``` Also aliased as: [at\_end\_of\_quarter](calculations#method-i-at_end_of_quarter) end\_of\_week(start\_day = Date.beginning\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 273 def end_of_week(start_day = Date.beginning_of_week) last_hour(days_since(6 - days_to_week_start(start_day))) end ``` Returns a new date/time representing the end of this week on the given day. Week is assumed to start on `start_day`, default is `Date.beginning_of_week` or `config.beginning_of_week` when set. [`DateTime`](../datetime) objects have their time set to 23:59:59. Also aliased as: [at\_end\_of\_week](calculations#method-i-at_end_of_week) end\_of\_year() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 294 def end_of_year change(month: 12).end_of_month end ``` Returns a new date/time representing the end of the year. [`DateTime`](../datetime) objects will have a time set to 23:59:59. Also aliased as: [at\_end\_of\_year](calculations#method-i-at_end_of_year) future?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 52 def future? self > self.class.current end ``` Returns true if the date/time is in the future. last\_month() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 230 def last_month months_ago(1) end ``` Short-hand for [`months_ago`](calculations#method-i-months_ago)(1). 
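`days_to_week_start`, shown earlier in this module, is pure modular arithmetic over `Date#wday` (0 = Sunday .. 6 = Saturday). A standalone sketch, with the day-name table reproduced to match that numbering:

```
# Sketch of days_to_week_start: (wday - start_day_number) % 7, where the
# day-name table mirrors Date#wday numbering (Sunday = 0).
DAYS_INTO_WEEK = { sunday: 0, monday: 1, tuesday: 2, wednesday: 3,
                   thursday: 4, friday: 5, saturday: 6 }

def days_to_week_start(wday, start_day = :monday)
  (wday - DAYS_INTO_WEEK.fetch(start_day)) % 7
end

days_to_week_start(4)           # Thursday, week starting Monday => 3
days_to_week_start(1)           # Monday itself => 0
days_to_week_start(0, :sunday)  # Sunday, week starting Sunday => 0
```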
last\_quarter() Alias for: [prev\_quarter](calculations#method-i-prev_quarter) last\_week(start\_day = Date.beginning\_of\_week, same\_time: false) Alias for: [prev\_week](calculations#method-i-prev_week) last\_weekday() Alias for: [prev\_weekday](calculations#method-i-prev_weekday) last\_year() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 241 def last_year years_ago(1) end ``` Short-hand for [`years_ago`](calculations#method-i-years_ago)(1). monday() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 265 def monday beginning_of_week(:monday) end ``` Returns Monday of this week assuming that week starts on Monday. `DateTime` objects have their time set to 0:00. months\_ago(months) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 97 def months_ago(months) advance(months: -months) end ``` Returns a new date/time the specified number of months ago. months\_since(months) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 102 def months_since(months) advance(months: months) end ``` Returns a new date/time the specified number of months in the future. next\_day?() Alias for: [tomorrow?](calculations#method-i-tomorrow-3F) next\_occurring(day\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 330 def next_occurring(day_of_week) from_now = DAYS_INTO_WEEK.fetch(day_of_week) - wday from_now += 7 unless from_now > 0 advance(days: from_now) end ``` Returns a new date/time representing the next occurrence of the specified day of week. 
``` today = Date.today # => Thu, 14 Dec 2017 today.next_occurring(:monday) # => Mon, 18 Dec 2017 today.next_occurring(:thursday) # => Thu, 21 Dec 2017 ``` next\_quarter() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 205 def next_quarter months_since(3) end ``` Short-hand for [`months_since`](calculations#method-i-months_since)(3) next\_week(given\_day\_in\_next\_week = Date.beginning\_of\_week, same\_time: false) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 190 def next_week(given_day_in_next_week = Date.beginning_of_week, same_time: false) result = first_hour(weeks_since(1).beginning_of_week.days_since(days_span(given_day_in_next_week))) same_time ? copy_time_to(result) : result end ``` Returns a new date/time representing the given day in the next week. ``` today = Date.today # => Thu, 07 May 2015 today.next_week # => Mon, 11 May 2015 ``` The `given_day_in_next_week` defaults to the beginning of the week which is determined by `Date.beginning_of_week` or `config.beginning_of_week` when set. ``` today = Date.today # => Thu, 07 May 2015 today.next_week(:friday) # => Fri, 15 May 2015 ``` `DateTime` objects have their time set to 0:00 unless `same_time` is true. ``` now = DateTime.current # => Thu, 07 May 2015 13:31:16 +0000 now.next_week # => Mon, 11 May 2015 00:00:00 +0000 ``` next\_weekday() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 196 def next_weekday if next_day.on_weekend? next_week(:monday, same_time: true) else next_day end end ``` Returns a new date/time representing the next weekday. on\_weekday?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 62 def on_weekday? !WEEKEND_DAYS.include?(wday) end ``` Returns true if the date/time does not fall on a Saturday or Sunday. 
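`next_occurring` always moves strictly forward: when the target weekday is today or already past in the current week, it adds a full week. A sketch of that offset calculation in isolation (wday numbering: 0 = Sunday .. 6 = Saturday):

```
# Sketch of next_occurring's day offset: target wday minus today's wday,
# bumped by 7 whenever the result would be zero or negative.
def days_until_next(target_wday, today_wday)
  from_now = target_wday - today_wday
  from_now += 7 unless from_now > 0
  from_now
end

days_until_next(1, 4)  # Thursday -> next Monday: 4 days
days_until_next(5, 4)  # Thursday -> Friday: 1 day
days_until_next(4, 4)  # Thursday -> next Thursday: a full 7 days
```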
on\_weekend?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 57 def on_weekend? WEEKEND_DAYS.include?(wday) end ``` Returns true if the date/time falls on a Saturday or Sunday. past?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 47 def past? self < self.class.current end ``` Returns true if the date/time is in the past. prev\_day?() Alias for: [yesterday?](calculations#method-i-yesterday-3F) prev\_occurring(day\_of\_week) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 341 def prev_occurring(day_of_week) ago = wday - DAYS_INTO_WEEK.fetch(day_of_week) ago += 7 unless ago > 0 advance(days: -ago) end ``` Returns a new date/time representing the previous occurrence of the specified day of week. ``` today = Date.today # => Thu, 14 Dec 2017 today.prev_occurring(:monday) # => Mon, 11 Dec 2017 today.prev_occurring(:thursday) # => Thu, 07 Dec 2017 ``` prev\_quarter() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 235 def prev_quarter months_ago(3) end ``` Short-hand for [`months_ago`](calculations#method-i-months_ago)(3). Also aliased as: [last\_quarter](calculations#method-i-last_quarter) prev\_week(start\_day = Date.beginning\_of\_week, same\_time: false) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 213 def prev_week(start_day = Date.beginning_of_week, same_time: false) result = first_hour(weeks_ago(1).beginning_of_week.days_since(days_span(start_day))) same_time ? copy_time_to(result) : result end ``` Returns a new date/time representing the given day in the previous week. Week is assumed to start on `start_day`, default is `Date.beginning_of_week` or `config.beginning_of_week` when set. [`DateTime`](../datetime) objects have their time set to 0:00 unless `same_time` is true. 
Also aliased as: [last\_week](calculations#method-i-last_week) prev\_weekday() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 220 def prev_weekday if prev_day.on_weekend? copy_time_to(beginning_of_week(:friday)) else prev_day end end ``` Returns a new date/time representing the previous weekday. Also aliased as: [last\_weekday](calculations#method-i-last_weekday) sunday() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 280 def sunday end_of_week(:monday) end ``` Returns Sunday of this week assuming that week starts on Monday. `DateTime` objects have their time set to 23:59:59. today?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 30 def today? to_date == ::Date.current end ``` Returns true if the date/time is today. tomorrow() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 25 def tomorrow advance(days: 1) end ``` Returns a new date/time representing tomorrow. tomorrow?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 35 def tomorrow? to_date == ::Date.current.tomorrow end ``` Returns true if the date/time is tomorrow. Also aliased as: [next\_day?](calculations#method-i-next_day-3F) weeks\_ago(weeks) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 87 def weeks_ago(weeks) advance(weeks: -weeks) end ``` Returns a new date/time the specified number of weeks ago. weeks\_since(weeks) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 92 def weeks_since(weeks) advance(weeks: weeks) end ``` Returns a new date/time the specified number of weeks in the future. 
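The weekend predicates shown earlier in this module reduce to a membership test on `Date#wday`. A plain-Ruby sketch (no Active Support), using the doc's Thursday, 14 Dec 2017 as a reference date:

```
require "date"

# Sketch of on_weekend? / on_weekday?: wday 0 is Sunday, 6 is Saturday.
WEEKEND_DAYS = [0, 6]

def weekend?(date)
  WEEKEND_DAYS.include?(date.wday)
end

weekend?(Date.new(2017, 12, 16))  # Saturday => true
weekend?(Date.new(2017, 12, 14))  # Thursday => false
```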
years\_ago(years) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 107 def years_ago(years) advance(years: -years) end ``` Returns a new date/time the specified number of years ago. years\_since(years) Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 112 def years_since(years) advance(years: years) end ``` Returns a new date/time the specified number of years in the future. yesterday() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 20 def yesterday advance(days: -1) end ``` Returns a new date/time representing yesterday. yesterday?() Show source ``` # File activesupport/lib/active_support/core_ext/date_and_time/calculations.rb, line 41 def yesterday? to_date == ::Date.current.yesterday end ``` Returns true if the date/time is yesterday. Also aliased as: [prev\_day?](calculations#method-i-prev_day-3F) rails module Module::Concerning module Module::Concerning ========================== Bite-sized separation of concerns ================================= We often find ourselves with a medium-sized chunk of behavior that we'd like to extract, but only mix in to a single class. Extracting a plain old Ruby object to encapsulate it and collaborate or delegate to the original object is often a good choice, but when there's no additional state to encapsulate or we're making DSL-style declarations about the parent class, introducing new collaborators can obfuscate rather than simplify. The typical route is to just dump everything in a monolithic class, perhaps with a comment, as a least-bad alternative. Using modules in separate files means tedious sifting to get a big-picture view. Dissatisfying ways to separate small concerns ============================================= Using comments: --------------- ``` class Todo < ApplicationRecord # Other todo implementation # ... 
## Event tracking has_many :events before_create :track_creation private def track_creation # ... end end ``` With an inline module: ---------------------- Noisy syntax. ``` class Todo < ApplicationRecord # Other todo implementation # ... module EventTracking extend ActiveSupport::Concern included do has_many :events before_create :track_creation end private def track_creation # ... end end include EventTracking end ``` Mix-in noise exiled to its own file: ------------------------------------ Once our chunk of behavior starts pushing the scroll-to-understand-it boundary, we give in and move it to a separate file. At this size, the increased overhead can be a reasonable tradeoff even if it reduces our at-a-glance perception of how things work. ``` class Todo < ApplicationRecord # Other todo implementation # ... include TodoEventTracking end ``` Introducing [`Module#concerning`](concerning#method-i-concerning) ================================================================= By quieting the mix-in noise, we arrive at a natural, low-ceremony way to separate bite-sized concerns. ``` class Todo < ApplicationRecord # Other todo implementation # ... concerning :EventTracking do included do has_many :events before_create :track_creation end private def track_creation # ... end end end Todo.ancestors # => [Todo, Todo::EventTracking, ApplicationRecord, Object] ``` This small step has some wonderful ripple effects. We can * grok the behavior of our class in one glance, * clean up monolithic junk-drawer classes by separating their concerns, and * stop leaning on protected/private for crude “this is internal stuff” modularity. ### Prepending concerning `concerning` supports a `prepend: true` argument which will `prepend` the concern instead of using `include` for it. 
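The mechanics of `concerning`, including the `prepend: true` option, can be sketched in plain Ruby. The minimal `concerning` below is an assumption for illustration only: it skips `ActiveSupport::Concern`, so `included do` blocks are not supported, but it shows the core idea of naming a module inline and mixing it straight in:

```ruby
# Minimal plain-Ruby sketch of what `concerning` does under the hood:
# define a named module inline and mix it straight into the class.
class Todo
  def self.concerning(topic, prepend: false, &block)
    mod = const_set(topic, Module.new(&block))
    prepend ? self.prepend(mod) : include(mod)
  end

  concerning :EventTracking do
    def track_creation
      "tracked"
    end
  end
end

Todo.ancestors.first(2)   # => [Todo, Todo::EventTracking]
Todo.new.track_creation   # => "tracked"
```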
concern(topic, &module\_definition) Show source ``` # File activesupport/lib/active_support/core_ext/module/concerning.rb, line 132 def concern(topic, &module_definition) const_set topic, Module.new { extend ::ActiveSupport::Concern module_eval(&module_definition) } end ``` A low-cruft shortcut to define a concern. ``` concern :EventTracking do ... end ``` is equivalent to ``` module EventTracking extend ActiveSupport::Concern ... end ``` concerning(topic, prepend: false, &block) Show source ``` # File activesupport/lib/active_support/core_ext/module/concerning.rb, line 114 def concerning(topic, prepend: false, &block) method = prepend ? :prepend : :include __send__(method, concern(topic, &block)) end ``` Define a new concern and mix it in.
rails class Module::DelegationError class Module::DelegationError ============================== Parent: NoMethodError Error generated by `delegate` when a method is called on `nil` and `allow_nil` option is not used. rails module Digest::UUID module Digest::UUID ==================== uuid\_from\_hash(hash\_class, namespace, name) Show source ``` # File activesupport/lib/active_support/core_ext/digest/uuid.rb, line 21 def self.uuid_from_hash(hash_class, namespace, name) if hash_class == Digest::MD5 || hash_class == OpenSSL::Digest::MD5 version = 3 elsif hash_class == Digest::SHA1 || hash_class == OpenSSL::Digest::SHA1 version = 5 else raise ArgumentError, "Expected OpenSSL::Digest::SHA1 or OpenSSL::Digest::MD5, got #{hash_class.name}." end uuid_namespace = pack_uuid_namespace(namespace) hash = hash_class.new hash.update(uuid_namespace) hash.update(name) ary = hash.digest.unpack("NnnnnN") ary[2] = (ary[2] & 0x0FFF) | (version << 12) ary[3] = (ary[3] & 0x3FFF) | 0x8000 "%08x-%04x-%04x-%04x-%04x%08x" % ary end ``` Generates a v5 non-random [`UUID`](uuid) (Universally Unique IDentifier). Using OpenSSL::Digest::MD5 generates version 3 UUIDs; OpenSSL::Digest::SHA1 generates version 5 UUIDs. [`uuid_from_hash`](uuid#method-c-uuid_from_hash) always generates the same [`UUID`](uuid) for a given name and namespace combination. See RFC 4122 for details of [`UUID`](uuid) at: [www.ietf.org/rfc/rfc4122.txt](https://www.ietf.org/rfc/rfc4122.txt) uuid\_v3(uuid\_namespace, name) Show source ``` # File activesupport/lib/active_support/core_ext/digest/uuid.rb, line 44 def self.uuid_v3(uuid_namespace, name) uuid_from_hash(OpenSSL::Digest::MD5, uuid_namespace, name) end ``` Convenience method for [`uuid_from_hash`](uuid#method-c-uuid_from_hash) using OpenSSL::Digest::MD5. uuid\_v4() Show source ``` # File activesupport/lib/active_support/core_ext/digest/uuid.rb, line 54 def self.uuid_v4 SecureRandom.uuid end ``` Convenience method for SecureRandom.uuid. 
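The version/variant bit-twiddling in `uuid_from_hash` above can be reproduced with only the stdlib. This sketch assumes the namespace is already a raw 16-byte string (Rails additionally accepts a UUID string and packs it first via `pack_uuid_namespace`):

```ruby
require "digest"

# Stdlib-only sketch of the name-based v5 UUID construction above.
# Assumes namespace_bytes is a raw 16-byte namespace (e.g. the DNS
# namespace UUID from RFC 4122, decoded from hex).
def uuid_v5(namespace_bytes, name)
  ary = Digest::SHA1.digest(namespace_bytes + name).unpack("NnnnnN")
  ary[2] = (ary[2] & 0x0FFF) | (5 << 12) # high nibble of time_hi = version 5
  ary[3] = (ary[3] & 0x3FFF) | 0x8000    # two high bits = RFC 4122 variant
  "%08x-%04x-%04x-%04x-%04x%08x" % ary
end

dns_namespace = ["6ba7b8109dad11d180b400c04fd430c8"].pack("H*")
uuid_v5(dns_namespace, "www.example.org")
# deterministic: the same namespace and name always yield the same UUID
```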
uuid\_v5(uuid\_namespace, name) Show source ``` # File activesupport/lib/active_support/core_ext/digest/uuid.rb, line 49 def self.uuid_v5(uuid_namespace, name) uuid_from_hash(OpenSSL::Digest::SHA1, uuid_namespace, name) end ``` Convenience method for [`uuid_from_hash`](uuid#method-c-uuid_from_hash) using OpenSSL::Digest::SHA1. rails module ActionCable::TestHelper module ActionCable::TestHelper =============================== Provides helper methods for testing Action Cable broadcasting assert\_broadcast\_on(stream, data, &block) Show source ``` # File actioncable/lib/action_cable/test_helper.rb, line 97 def assert_broadcast_on(stream, data, &block) # Encode to JSON and back–we want to use this value to compare # with decoded JSON. # Comparing JSON strings doesn't work due to the order of the keys. serialized_msg = ActiveSupport::JSON.decode(ActiveSupport::JSON.encode(data)) new_messages = broadcasts(stream) if block_given? old_messages = new_messages clear_messages(stream) _assert_nothing_raised_or_warn("assert_broadcast_on", &block) new_messages = broadcasts(stream) clear_messages(stream) # Restore all sent messages (old_messages + new_messages).each { |m| pubsub_adapter.broadcast(stream, m) } end message = new_messages.find { |msg| ActiveSupport::JSON.decode(msg) == serialized_msg } assert message, "No messages sent with #{data} to #{stream}" end ``` Asserts that the specified message has been sent to the stream. ``` def test_assert_transmitted_message ActionCable.server.broadcast 'messages', text: 'hello' assert_broadcast_on('messages', text: 'hello') end ``` If a block is passed, that block should cause a message with the specified data to be sent. 
``` def test_assert_broadcast_on_again assert_broadcast_on('messages', text: 'hello') do ActionCable.server.broadcast 'messages', text: 'hello' end end ``` assert\_broadcasts(stream, number, &block) Show source ``` # File actioncable/lib/action_cable/test_helper.rb, line 45 def assert_broadcasts(stream, number, &block) if block_given? original_count = broadcasts_size(stream) _assert_nothing_raised_or_warn("assert_broadcasts", &block) new_count = broadcasts_size(stream) actual_count = new_count - original_count else actual_count = broadcasts_size(stream) end assert_equal number, actual_count, "#{number} broadcasts to #{stream} expected, but #{actual_count} were sent" end ``` Asserts that the number of broadcasted messages to the stream matches the given number. ``` def test_broadcasts assert_broadcasts 'messages', 0 ActionCable.server.broadcast 'messages', { text: 'hello' } assert_broadcasts 'messages', 1 ActionCable.server.broadcast 'messages', { text: 'world' } assert_broadcasts 'messages', 2 end ``` If a block is passed, that block should cause the specified number of messages to be broadcasted. ``` def test_broadcasts_again assert_broadcasts('messages', 1) do ActionCable.server.broadcast 'messages', { text: 'hello' } end assert_broadcasts('messages', 2) do ActionCable.server.broadcast 'messages', { text: 'hi' } ActionCable.server.broadcast 'messages', { text: 'how are you?' } end end ``` assert\_no\_broadcasts(stream, &block) Show source ``` # File actioncable/lib/action_cable/test_helper.rb, line 78 def assert_no_broadcasts(stream, &block) assert_broadcasts stream, 0, &block end ``` Asserts that no messages have been sent to the stream. ``` def test_no_broadcasts assert_no_broadcasts 'messages' ActionCable.server.broadcast 'messages', { text: 'hi' } assert_broadcasts 'messages', 1 end ``` If a block is passed, that block should not cause any message to be sent. 
``` def test_no_broadcasts_again assert_no_broadcasts 'messages' do # No messages should be sent from this block end end ``` Note: This assertion is simply a shortcut for: ``` assert_broadcasts 'messages', 0, &block ``` rails class ActionCable::RemoteConnections class ActionCable::RemoteConnections ===================================== Parent: [Object](../object) If you need to disconnect a given connection, you can go through the [`RemoteConnections`](remoteconnections). You can find the connections you're looking for by searching for the identifier declared on the connection. For example: ``` module ApplicationCable class Connection < ActionCable::Connection::Base identified_by :current_user .... end end ActionCable.server.remote_connections.where(current_user: User.find(1)).disconnect ``` This will disconnect all the connections established for `User.find(1)`, across all servers running on all machines, because it uses the internal channel that all of these servers are subscribed to. server[R] new(server) Show source ``` # File actioncable/lib/action_cable/remote_connections.rb, line 25 def initialize(server) @server = server end ``` where(identifier) Show source ``` # File actioncable/lib/action_cable/remote_connections.rb, line 29 def where(identifier) RemoteConnection.new(server, identifier) end ``` rails module ActionCable::Connection::Assertions module ActionCable::Connection::Assertions =========================================== assert\_reject\_connection(&block) Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 25 def assert_reject_connection(&block) assert_raises(Authorization::UnauthorizedError, "Expected to reject connection but no rejection was made", &block) end ``` Asserts that the connection is rejected (via `reject_unauthorized_connection`). 
``` # Asserts that connection without user_id fails assert_reject_connection { connect params: { user_id: '' } } ``` rails class ActionCable::Connection::TaggedLoggerProxy class ActionCable::Connection::TaggedLoggerProxy ================================================= Parent: [Object](../../object) Allows the use of per-connection tags against the server logger. This wouldn't work using the traditional `ActiveSupport::TaggedLogging` enhanced [`Rails.logger`](../../rails#attribute-c-logger), as that logger will reset the tags between requests. The connection is long-lived, so it needs its own set of tags for its independent duration. tags[R] new(logger, tags:) Show source ``` # File actioncable/lib/action_cable/connection/tagged_logger_proxy.rb, line 11 def initialize(logger, tags:) @logger = logger @tags = tags.flatten end ``` add\_tags(\*tags) Show source ``` # File actioncable/lib/action_cable/connection/tagged_logger_proxy.rb, line 16 def add_tags(*tags) @tags += tags.flatten @tags = @tags.uniq end ``` tag(logger) { || ... } Show source ``` # File actioncable/lib/action_cable/connection/tagged_logger_proxy.rb, line 21 def tag(logger, &block) if logger.respond_to?(:tagged) current_tags = tags - logger.formatter.current_tags logger.tagged(*current_tags, &block) else yield end end ``` log(type, message) Show source ``` # File actioncable/lib/action_cable/connection/tagged_logger_proxy.rb, line 37 def log(type, message) # :doc: tag(@logger) { @logger.send type, message } end ``` rails class ActionCable::Connection::TestCookieJar class ActionCable::Connection::TestCookieJar ============================================= Parent: [ActiveSupport::HashWithIndifferentAccess](../../activesupport/hashwithindifferentaccess) We don't want to use the whole “encryption stack” for connection unit-tests, but we want to make sure that users test against the correct types of cookies (i.e. 
signed or encrypted or plain) encrypted() Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 38 def encrypted self[:encrypted] ||= {}.with_indifferent_access end ``` signed() Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 34 def signed self[:signed] ||= {}.with_indifferent_access end ``` rails module ActionCable::Connection::Identification module ActionCable::Connection::Identification =============================================== connection\_identifier() Show source ``` # File actioncable/lib/action_cable/connection/identification.rb, line 27 def connection_identifier unless defined? @connection_identifier @connection_identifier = connection_gid identifiers.filter_map { |id| instance_variable_get("@#{id}") } end @connection_identifier end ``` Return a single connection identifier that combines the value of all the registered identifiers into a single gid. rails class ActionCable::Connection::Base class ActionCable::Connection::Base ==================================== Parent: [Object](../../object) Included modules: [ActionCable::Connection::Identification](identification), [ActionCable::Connection::InternalChannel](internalchannel), [ActionCable::Connection::Authorization](authorization), [ActiveSupport::Rescuable](../../activesupport/rescuable) For every WebSocket connection the Action Cable server accepts, a `Connection` object will be instantiated. This instance becomes the parent of all of the channel subscriptions that are created from there on. Incoming messages are then routed to these channel subscriptions based on an identifier sent by the Action Cable consumer. The `Connection` itself does not deal with any specific application logic beyond authentication and authorization. 
Here's a basic example: ``` module ApplicationCable class Connection < ActionCable::Connection::Base identified_by :current_user def connect self.current_user = find_verified_user logger.add_tags current_user.name end def disconnect # Any cleanup work needed when the cable connection is cut. end private def find_verified_user User.find_by_identity(cookies.encrypted[:identity_id]) || reject_unauthorized_connection end end end ``` First, we declare that this connection can be identified by its current\_user. This allows us to later be able to find all connections established for that current\_user (and potentially disconnect them). You can declare as many identification indexes as you like. Declaring an identification means that an attr\_accessor is automatically set for that key. Second, we rely on the fact that the WebSocket connection is established with the cookies from the domain being sent along. This makes it easy to use signed cookies that were set when logging in via a web interface to authorize the WebSocket connection. Finally, we add a tag to the connection-specific logger with the name of the current user to easily distinguish their messages in the log. Pretty simple, eh? 
env[R] logger[R] protocol[R] server[R] subscriptions[R] worker\_pool[R] new(server, env, coder: ActiveSupport::JSON) Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 55 def initialize(server, env, coder: ActiveSupport::JSON) @server, @env, @coder = server, env, coder @worker_pool = server.worker_pool @logger = new_tagged_logger @websocket = ActionCable::Connection::WebSocket.new(env, self, event_loop) @subscriptions = ActionCable::Connection::Subscriptions.new(self) @message_buffer = ActionCable::Connection::MessageBuffer.new(self) @_internal_subscriptions = nil @started_at = Time.now end ``` beat() Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 125 def beat transmit type: ActionCable::INTERNAL[:message_types][:ping], message: Time.now.to_i end ``` close(reason: nil, reconnect: true) Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 100 def close(reason: nil, reconnect: true) transmit( type: ActionCable::INTERNAL[:message_types][:disconnect], reason: reason, reconnect: reconnect ) websocket.close end ``` Close the WebSocket connection. send\_async(method, \*arguments) Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 110 def send_async(method, *arguments) worker_pool.async_invoke(self, method, *arguments) end ``` Invoke a method on the connection asynchronously through the pool of thread workers. statistics() Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 116 def statistics { identifier: connection_identifier, started_at: @started_at, subscriptions: subscriptions.identifiers, request_id: @env["action_dispatch.request_id"] } end ``` Return a basic hash of statistics for the connection keyed with `identifier`, `started_at`, `subscriptions`, and `request_id`. This can be returned by a health check against the connection. 
cookies() Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 159 def cookies # :doc: request.cookie_jar end ``` The cookies of the request that initiated the WebSocket connection. Useful for performing authorization checks. request() Show source ``` # File actioncable/lib/action_cable/connection/base.rb, line 151 def request # :doc: @request ||= begin environment = Rails.application.env_config.merge(env) if defined?(Rails.application) && Rails.application ActionDispatch::Request.new(environment || env) end end ``` The request that initiated the WebSocket connection is available here. This gives access to the environment, cookies, etc. rails module ActionCable::Connection::Authorization module ActionCable::Connection::Authorization ============================================== reject\_unauthorized\_connection() Show source ``` # File actioncable/lib/action_cable/connection/authorization.rb, line 9 def reject_unauthorized_connection logger.error "An unauthorized connection attempt was rejected" raise UnauthorizedError end ``` Closes the WebSocket connection if it is open and returns a 404 “File not Found” response. rails class ActionCable::Connection::TestCase class ActionCable::Connection::TestCase ======================================== Parent: [ActiveSupport::TestCase](../../activesupport/testcase) Included modules: [ActionCable::Connection::TestCase::Behavior](testcase/behavior) Unit test Action Cable connections. Useful to check whether a connection's `identified_by` gets assigned properly and that any improper connection requests are rejected. Basic example ------------- Unit tests are written as follows: 1. Simulate a connection attempt by calling `connect`. 2. Assert state, e.g. identifiers, has been assigned. ``` class ApplicationCable::ConnectionTest < ActionCable::Connection::TestCase def test_connects_with_proper_cookie # Simulate the connection request with a cookie. 
cookies["user_id"] = users(:john).id connect # Assert the connection identifier matches the fixture. assert_equal users(:john).id, connection.user.id end def test_rejects_connection_without_proper_cookie assert_reject_connection { connect } end end ``` `connect` accepts additional information about the HTTP request with the `params`, `headers`, `session` and Rack `env` options. ``` def test_connect_with_headers_and_query_string connect params: { user_id: 1 }, headers: { "X-API-TOKEN" => "secret-my" } assert_equal "1", connection.user.id assert_equal "secret-my", connection.token end def test_connect_with_params connect params: { user_id: 1 } assert_equal "1", connection.user.id end ``` You can also set up the correct cookies before the connection request: ``` def test_connect_with_cookies # Plain cookies: cookies["user_id"] = 1 # Or signed/encrypted: # cookies.signed["user_id"] = 1 # cookies.encrypted["user_id"] = 1 connect assert_equal "1", connection.user_id end ``` Connection is automatically inferred ------------------------------------ [`ActionCable::Connection::TestCase`](testcase) will automatically infer the connection under test from the test class name. If the connection cannot be inferred from the test class name, you can explicitly set it with `tests`. ``` class ConnectionTest < ActionCable::Connection::TestCase tests ApplicationCable::Connection end ``` rails module ActionCable::Connection::InternalChannel module ActionCable::Connection::InternalChannel ================================================ Makes it possible for the RemoteConnection to disconnect a specific connection. 
rails module ActionCable::Connection::TestCase::Behavior module ActionCable::Connection::TestCase::Behavior =================================================== Included modules: [ActiveSupport::Testing::ConstantLookup](../../../activesupport/testing/constantlookup), [ActionCable::Connection::Assertions](../assertions) DEFAULT\_PATH connect(path = ActionCable.server.config.mount\_path, \*\*request\_params) Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 183 def connect(path = ActionCable.server.config.mount_path, **request_params) path ||= DEFAULT_PATH connection = self.class.connection_class.allocate connection.singleton_class.include(TestConnection) connection.send(:initialize, build_test_request(path, **request_params)) connection.connect if connection.respond_to?(:connect) # Only set instance variable if connected successfully @connection = connection end ``` Performs connection attempt to exert [`connect`](behavior#method-i-connect) on the connection under test. Accepts request path as the first argument and the following request options: * params – URL parameters ([`Hash`](../../../hash)) * headers – request headers ([`Hash`](../../../hash)) * session – session data ([`Hash`](../../../hash)) * env – additional Rack env configuration ([`Hash`](../../../hash)) cookies() Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 203 def cookies @cookie_jar ||= TestCookieJar.new end ``` disconnect() Show source ``` # File actioncable/lib/action_cable/connection/test_case.rb, line 196 def disconnect raise "Must be connected!" if connection.nil? connection.disconnect if connection.respond_to?(:disconnect) @connection = nil end ``` Exert [`disconnect`](behavior#method-i-disconnect) on the connection under test.
rails module ActionCable::Connection::Identification::ClassMethods module ActionCable::Connection::Identification::ClassMethods ============================================================= identified\_by(\*identifiers) Show source ``` # File actioncable/lib/action_cable/connection/identification.rb, line 20 def identified_by(*identifiers) Array(identifiers).each { |identifier| attr_accessor identifier } self.identifiers += identifiers end ``` Mark a key as being a connection identifier index that can then be used to find the specific connection again later. Common identifiers are current\_user and current\_account, but could be anything, really. Note that anything marked as an identifier will automatically create a delegate by the same name on any channel instances created off the connection. rails class ActionCable::Server::Base class ActionCable::Server::Base ================================ Parent: [Object](../../object) Included modules: [ActionCable::Server::Broadcasting](broadcasting) A singleton `ActionCable::Server` instance is available via ActionCable.server. It's used by the Rack process that starts the Action Cable server, but is also used by the user to reach the [`RemoteConnections`](../remoteconnections) object, which is used for finding and disconnecting connections across all servers. Also, this is the server instance used for broadcasting. See [`Broadcasting`](broadcasting) for more information. 
config[R] mutex[R] logger() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 19 def self.logger; config.logger; end ``` new(config: self.class.config) Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 24 def initialize(config: self.class.config) @config = config @mutex = Monitor.new @remote_connections = @event_loop = @worker_pool = @pubsub = nil end ``` call(env) Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 31 def call(env) setup_heartbeat_timer config.connection_class.call.new(self, env).process end ``` Called by Rack to set up the server. connection\_identifiers() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 87 def connection_identifiers config.connection_class.call.identifiers end ``` All of the identifiers applied to the connection class associated with this server. disconnect(identifiers) Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 37 def disconnect(identifiers) remote_connections.where(identifiers).disconnect end ``` Disconnect all the connections identified by `identifiers` on this server or any others via [`RemoteConnections`](../remoteconnections). event\_loop() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 62 def event_loop @event_loop || @mutex.synchronize { @event_loop ||= ActionCable::Connection::StreamEventLoop.new } end ``` pubsub() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 82 def pubsub @pubsub || @mutex.synchronize { @pubsub ||= config.pubsub_adapter.new(self) } end ``` Adapter used for all streams/broadcasting. remote\_connections() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 58 def remote_connections @remote_connections || @mutex.synchronize { @remote_connections ||= RemoteConnections.new(self) } end ``` Gateway to [`RemoteConnections`](../remoteconnections). See that class for details. 
restart() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 41 def restart connections.each do |connection| connection.close(reason: ActionCable::INTERNAL[:disconnect_reasons][:server_restart]) end @mutex.synchronize do # Shutdown the worker pool @worker_pool.halt if @worker_pool @worker_pool = nil # Shutdown the pub/sub adapter @pubsub.shutdown if @pubsub @pubsub = nil end end ``` worker\_pool() Show source ``` # File actioncable/lib/action_cable/server/base.rb, line 77 def worker_pool @worker_pool || @mutex.synchronize { @worker_pool ||= ActionCable::Server::Worker.new(max_size: config.worker_pool_size) } end ``` The worker pool is where we run connection callbacks and channel actions. We do as little as possible on the server's main thread. The worker pool is an executor service that's backed by a pool of threads working from a task queue. The thread pool size maxes out at 4 worker threads by default. Tune the size yourself with `config.action_cable.worker_pool_size`. Using Active Record, Redis, etc. within your channel actions means you'll get a separate connection from each thread in the worker pool. Plan your deployment accordingly: 5 servers each running 5 Puma workers each running an 8-thread worker pool means at least 200 database connections. Also, ensure that your database connection pool size is at least as large as your worker pool size. Otherwise, workers may oversubscribe the database connection pool and block while they wait for other workers to release their connections. Use a smaller worker pool or a larger database connection pool instead. rails module ActionCable::Server::Broadcasting module ActionCable::Server::Broadcasting ========================================= [`Broadcasting`](broadcasting) is how other parts of your application can send messages to a channel's subscribers. As explained in `Channel`, most of the time, these broadcastings are streamed directly to the clients subscribed to the named broadcasting. 
Let's explain with a full-stack example: ``` class WebNotificationsChannel < ApplicationCable::Channel def subscribed stream_from "web_notifications_#{current_user.id}" end end # Somewhere in your app this is called, perhaps from a NewCommentJob: ActionCable.server.broadcast \ "web_notifications_1", { title: "New things!", body: "All that's fit for print" } # Client-side CoffeeScript, which assumes you've already requested the right to send web notifications: App.cable.subscriptions.create "WebNotificationsChannel", received: (data) -> new Notification data['title'], body: data['body'] ``` broadcast(broadcasting, message, coder: ActiveSupport::JSON) Show source ``` # File actioncable/lib/action_cable/server/broadcasting.rb, line 24 def broadcast(broadcasting, message, coder: ActiveSupport::JSON) broadcaster_for(broadcasting, coder: coder).broadcast(message) end ``` Broadcast a hash directly to a named `broadcasting`. This will later be JSON encoded. broadcaster\_for(broadcasting, coder: ActiveSupport::JSON) Show source ``` # File actioncable/lib/action_cable/server/broadcasting.rb, line 30 def broadcaster_for(broadcasting, coder: ActiveSupport::JSON) Broadcaster.new(self, String(broadcasting), coder: coder) end ``` Returns a broadcaster for a named `broadcasting` that can be reused. Useful when you have an object that may need multiple spots to transmit to a specific broadcasting over and over. rails class ActionCable::Server::Configuration class ActionCable::Server::Configuration ========================================= Parent: [Object](../../object) An instance of this configuration object is available via ActionCable.server.config, which allows you to tweak Action Cable configuration in a Rails config initializer. 
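The attribute list that follows maps to `config.action_cable.*` settings. As a sketch of how they are tweaked in an initializer (the values here are illustrative examples, not defaults):

```ruby
# config/initializers/action_cable.rb -- illustrative values, not defaults
Rails.application.configure do
  config.action_cable.mount_path = "/cable"
  config.action_cable.worker_pool_size = 8
  config.action_cable.allowed_request_origins = ["https://example.com"]
  config.action_cable.disable_request_forgery_protection = false
end
```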
allow\_same\_origin\_as\_host[RW] allowed\_request\_origins[RW] cable[RW] connection\_class[RW] disable\_request\_forgery\_protection[RW] log\_tags[RW] logger[RW] mount\_path[RW] precompile\_assets[RW] url[RW] worker\_pool\_size[RW] new() Show source ``` # File actioncable/lib/action_cable/server/configuration.rb, line 14 def initialize @log_tags = [] @connection_class = -> { ActionCable::Connection::Base } @worker_pool_size = 4 @disable_request_forgery_protection = false @allow_same_origin_as_host = true end ``` pubsub\_adapter() Show source ``` # File actioncable/lib/action_cable/server/configuration.rb, line 27 def pubsub_adapter adapter = (cable.fetch("adapter") { "redis" }) # Require the adapter itself and give useful feedback about # 1. Missing adapter gems and # 2. Adapter gems' missing dependencies. path_to_adapter = "action_cable/subscription_adapter/#{adapter}" begin require path_to_adapter rescue LoadError => e # We couldn't require the adapter itself. Raise an exception that # points out config typos and missing gems. if e.path == path_to_adapter # We can assume that a non-builtin adapter was specified, so it's # either misspelled or missing from Gemfile. raise e.class, "Could not load the '#{adapter}' Action Cable pubsub adapter. Ensure that the adapter is spelled correctly in config/cable.yml and that you've added the necessary adapter gem to your Gemfile.", e.backtrace # Bubbled up from the adapter require. Prefix the exception message # with some guidance about how to address it and reraise. else raise e.class, "Error loading the '#{adapter}' Action Cable pubsub adapter. Missing a gem it depends on? #{e.message}", e.backtrace end end adapter = adapter.camelize adapter = "PostgreSQL" if adapter == "Postgresql" "ActionCable::SubscriptionAdapter::#{adapter}".constantize end ``` Returns constant of subscription adapter specified in config/cable.yml. If the adapter cannot be found, this will default to the Redis adapter. 
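The constant-name resolution at the end of `pubsub_adapter` can be sketched in plain Ruby; the tiny `camelize` step below is a stand-in for Active Support's `String#camelize`:

```ruby
# Plain-Ruby sketch of the adapter-name resolution in pubsub_adapter
# above: camelize the name from config/cable.yml, special-case
# "postgresql", then build the adapter constant's name.
def adapter_constant_name(adapter)
  camelized = adapter.split("_").map(&:capitalize).join
  camelized = "PostgreSQL" if camelized == "Postgresql"
  "ActionCable::SubscriptionAdapter::#{camelized}"
end

adapter_constant_name("redis")      # => "ActionCable::SubscriptionAdapter::Redis"
adapter_constant_name("postgresql") # => "ActionCable::SubscriptionAdapter::PostgreSQL"
```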
Also makes sure proper dependencies are required. rails class ActionCable::Channel::Base class ActionCable::Channel::Base ================================= Parent: [Object](../../object) Included modules: ActionCable::Channel::Callbacks, ActionCable::Channel::PeriodicTimers, [ActionCable::Channel::Streams](streams), ActionCable::Channel::Naming, ActionCable::Channel::Broadcasting, [ActiveSupport::Rescuable](../../activesupport/rescuable) The channel provides the basic structure of grouping behavior into logical units when communicating over the WebSocket connection. You can think of a channel like a form of controller, but one that's capable of pushing content to the subscriber in addition to simply responding to the subscriber's direct requests. `Channel` instances are long-lived. A channel object will be instantiated when the cable consumer becomes a subscriber, and then lives until the consumer disconnects. This may be seconds, minutes, hours, or even days. That means you have to take special care not to do anything silly in a channel that would balloon its memory footprint or whatever. The references are forever, so they won't be released as is normally the case with a controller instance that gets thrown away after every request. Long-lived channels (and connections) also mean you're responsible for ensuring that the data is fresh. If you hold a reference to a user record, but the name is changed while that reference is held, you may be sending stale data if you don't take precautions to avoid it. The upside of long-lived channel instances is that you can use instance variables to keep reference to objects that future subscriber requests can interact with. 
Here's a quick example: ``` class ChatChannel < ApplicationCable::Channel def subscribed @room = Chat::Room[params[:room_number]] end def speak(data) @room.speak data, user: current_user end end ``` The speak action simply uses the Chat::Room object that was created when the channel was first subscribed to by the consumer when that subscriber wants to say something in the room. Action processing ----------------- Unlike subclasses of [`ActionController::Base`](../../actioncontroller/base), channels do not follow a RESTful constraint form for their actions. Instead, Action Cable operates through a remote-procedure call model. You can declare any public method on the channel (optionally taking a `data` argument), and this method is automatically exposed as callable to the client. Example: ``` class AppearanceChannel < ApplicationCable::Channel def subscribed @connection_token = generate_connection_token end def unsubscribed current_user.disappear @connection_token end def appear(data) current_user.appear @connection_token, on: data['appearing_on'] end def away current_user.away @connection_token end private def generate_connection_token SecureRandom.hex(36) end end ``` In this example, the subscribed and unsubscribed methods are not callable methods, as they were already declared in [`ActionCable::Channel::Base`](base), but `#appear` and `#away` are. `#generate_connection_token` is also not callable, since it's a private method. You'll see that appear accepts a data parameter, which it then uses as part of its model call. `#away` does not, since it's simply a trigger action. Also note that in this example, `current_user` is available because it was marked as an identifying attribute on the connection. All such identifiers will automatically create a delegation method of the same name on the channel instance. 
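The set of callable actions can be sketched in plain Ruby, with no Rails required. The stand-in classes below mirror the logic of `Channel::Base.action_methods`: take the channel's public instance methods, subtract everything inherited from the base class, then add back any base methods redefined directly on the channel. `FakeChannelBase` and `AppearanceChannel` here are illustrative stand-ins, not the real Action Cable classes:

```ruby
# Stand-in classes sketching how Action Cable decides which methods
# are client-callable actions (mirrors Channel::Base.action_methods).
class FakeChannelBase
  def subscribed; end # framework callback, not an action
end

class AppearanceChannel < FakeChannelBase
  def appear(data); end
  def away; end

  private

  def generate_connection_token; end
end

# Public methods of the channel, minus everything the base class already
# had, plus any base methods redefined directly on the channel.
actions = (AppearanceChannel.public_instance_methods(true) -
           FakeChannelBase.public_instance_methods(true) +
           AppearanceChannel.public_instance_methods(false)).uniq.map(&:to_s)

actions.sort # => ["appear", "away"]
```

Note that `subscribed` drops out because the base class defines it, and `generate_connection_token` drops out because it is private.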
Rejecting subscription requests ------------------------------- A channel can reject a subscription request in the [`subscribed`](base#method-i-subscribed) callback by invoking the [`reject`](base#method-i-reject) method: ``` class ChatChannel < ApplicationCable::Channel def subscribed @room = Chat::Room[params[:room_number]] reject unless current_user.can_access?(@room) end end ``` In this example, the subscription will be rejected if the `current_user` does not have access to the chat room. On the client-side, the `Channel#rejected` callback will get invoked when the server rejects the subscription request. connection[R] identifier[R] params[R] action\_methods() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 117 def action_methods @action_methods ||= begin # All public instance methods of this class, including ancestors methods = (public_instance_methods(true) - # Except for public instance methods of Base and its ancestors ActionCable::Channel::Base.public_instance_methods(true) + # Be sure to include shadowed public instance methods of this class public_instance_methods(false)).uniq.map(&:to_s) methods.to_set end end ``` A list of method names that should be considered actions. This includes all public instance methods on a channel, less any internal methods (defined on [`Base`](base)), adding back in any methods that are internal, but still exist on the class itself. #### Returns * `Set` - A set of all methods that should be considered actions. new(connection, identifier, params = {}) Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 144 def initialize(connection, identifier, params = {}) @connection = connection @identifier = identifier @params = params # When a channel is streaming via pubsub, we want to delay the confirmation # transmission until pubsub subscription is confirmed. 
# # The counter starts at 1 because it's awaiting a call to #subscribe_to_channel @defer_subscription_confirmation_counter = Concurrent::AtomicFixnum.new(1) @reject_subscription = nil @subscription_confirmation_sent = nil delegate_connection_identifiers end ``` clear\_action\_methods!() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 133 def clear_action_methods! # :doc: @action_methods = nil end ``` [`action_methods`](base#method-c-action_methods) are cached and there is sometimes need to refresh them. [`::clear_action_methods!`](base#method-c-clear_action_methods-21) allows you to do that, so next time you run [`action_methods`](base#method-c-action_methods), they will be recalculated. method\_added(name) Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 138 def method_added(name) # :doc: super clear_action_methods! end ``` Refresh the cached [`action_methods`](base#method-c-action_methods) when a new action\_method is added. Calls superclass method perform\_action(data) Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 164 def perform_action(data) action = extract_action(data) if processable_action?(action) payload = { channel_class: self.class.name, action: action, data: data } ActiveSupport::Notifications.instrument("perform_action.action_cable", payload) do dispatch_action(action, data) end else logger.error "Unable to process #{action_signature(action, data)}" end end ``` Extract the action name from the passed data and process it via the channel. The process will ensure that the action requested is a public method on the channel declared by the user (so not one of the callbacks like [`subscribed`](base#method-i-subscribed)). subscribe\_to\_channel() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 179 def subscribe_to_channel run_callbacks :subscribe do subscribed end reject_subscription if subscription_rejected? 
ensure_confirmation_sent end ``` This method is called after subscription has been added to the connection and confirms or rejects the subscription. defer\_subscription\_confirmation!() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 228 def defer_subscription_confirmation! # :doc: @defer_subscription_confirmation_counter.increment end ``` defer\_subscription\_confirmation?() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 232 def defer_subscription_confirmation? # :doc: @defer_subscription_confirmation_counter.value > 0 end ``` ensure\_confirmation\_sent() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 222 def ensure_confirmation_sent # :doc: return if subscription_rejected? @defer_subscription_confirmation_counter.decrement transmit_subscription_confirmation unless defer_subscription_confirmation? end ``` reject() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 240 def reject # :doc: @reject_subscription = true end ``` subscribed() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 199 def subscribed # :doc: # Override in subclasses end ``` Called once a consumer has become a subscriber of the channel. Usually the place to set up any streams you want this channel to be sending to the subscriber. subscription\_confirmation\_sent?() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 236 def subscription_confirmation_sent? # :doc: @subscription_confirmation_sent end ``` subscription\_rejected?() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 244 def subscription_rejected? 
# :doc: @reject_subscription end ``` transmit(data, via: nil) Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 211 def transmit(data, via: nil) # :doc: status = "#{self.class.name} transmitting #{data.inspect.truncate(300)}" status += " (via #{via})" if via logger.debug(status) payload = { channel_class: self.class.name, data: data, via: via } ActiveSupport::Notifications.instrument("transmit.action_cable", payload) do connection.transmit identifier: @identifier, message: data end end ``` Transmit a hash of data to the subscriber. The hash will automatically be wrapped in a JSON envelope with the proper channel identifier marked as the recipient. unsubscribed() Show source ``` # File actioncable/lib/action_cable/channel/base.rb, line 205 def unsubscribed # :doc: # Override in subclasses end ``` Called once a consumer has cut its cable connection. Can be used for cleaning up connections or marking users as offline or the like.
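The envelope that `transmit` hands to the connection can be sketched in plain Ruby. This is only the wire shape implied by `connection.transmit identifier: @identifier, message: data` in the source above (the channel identifier is itself a JSON string), not the framework's actual serializer:

```ruby
require "json"

# Sketch: each transmitted message is wrapped in an envelope that names
# the recipient channel via its JSON-encoded identifier.
identifier = JSON.generate(channel: "ChatChannel", room_number: 1)
envelope   = { identifier: identifier, message: { body: "hi" } }

wire    = JSON.generate(envelope)
decoded = JSON.parse(wire)
decoded["message"] # => {"body"=>"hi"}
```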
rails module ActionCable::Channel::ChannelStub module ActionCable::Channel::ChannelStub ========================================= Stub `stream_from` to track streams for the channel. Add public aliases for `subscription_confirmation_sent?` and `subscription_rejected?`. confirmed?() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 22 def confirmed? subscription_confirmation_sent? end ``` rejected?() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 26 def rejected? subscription_rejected? end ``` start\_periodic\_timers() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 43 def start_periodic_timers; end ``` Make periodic timers no-op Also aliased as: [stop\_periodic\_timers](channelstub#method-i-stop_periodic_timers) stop\_all\_streams() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 34 def stop_all_streams @_streams = [] end ``` stop\_periodic\_timers() Alias for: [start\_periodic\_timers](channelstub#method-i-start_periodic_timers) stream\_from(broadcasting, \*) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 30 def stream_from(broadcasting, *) streams << broadcasting end ``` streams() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 38 def streams @_streams ||= [] end ``` rails module ActionCable::Channel::Streams module ActionCable::Channel::Streams ===================================== [`Streams`](streams) allow channels to route broadcastings to the subscriber. A broadcasting is, as discussed elsewhere, a pubsub queue where any data placed into it is automatically sent to the clients that are connected at that time. It's purely an online queue, though. If you're not streaming a broadcasting at the very moment it sends out an update, you will not get that update, even if you connect after it has been sent. 
Most commonly, the streamed broadcast is sent straight to the subscriber on the client-side. The channel just acts as a connector between the two parties (the broadcaster and the channel subscriber). Here's an example of a channel that allows subscribers to get all new comments on a given page: ``` class CommentsChannel < ApplicationCable::Channel def follow(data) stream_from "comments_for_#{data['recording_id']}" end def unfollow stop_all_streams end end ``` Based on the above example, the subscribers of this channel will get whatever data is put into the, let's say, `comments_for_45` broadcasting as soon as it's put there. An example broadcasting for this channel looks like so: ``` ActionCable.server.broadcast "comments_for_45", { author: 'DHH', content: 'Rails is just swell' } ``` If you have a stream that is related to a model, then the broadcasting used can be generated from the model and channel. The following example would subscribe to a broadcasting like `comments:Z2lkOi8vVGVzdEFwcC9Qb3N0LzE`. ``` class CommentsChannel < ApplicationCable::Channel def subscribed post = Post.find(params[:id]) stream_for post end end ``` You can then broadcast to this channel using: ``` CommentsChannel.broadcast_to(@post, @comment) ``` If you don't just want to parlay the broadcast unfiltered to the subscriber, you can also supply a callback that lets you alter what is sent out. The below example shows how you can use this to provide performance introspection in the process: ``` class ChatChannel < ApplicationCable::Channel def subscribed @room = Chat::Room[params[:room_number]] stream_for @room, coder: ActiveSupport::JSON do |message| if message['originated_at'].present? 
elapsed_time = (Time.now.to_f - message['originated_at']).round(2) ActiveSupport::Notifications.instrument :performance, measurement: 'Chat.message_delay', value: elapsed_time, action: :timing logger.info "Message took #{elapsed_time}s to arrive" end transmit message end end end ``` You can stop streaming from all broadcasts by calling [`stop_all_streams`](streams#method-i-stop_all_streams). stop\_all\_streams() Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 120 def stop_all_streams streams.each do |broadcasting, callback| pubsub.unsubscribe broadcasting, callback logger.info "#{self.class.name} stopped streaming from #{broadcasting}" end.clear end ``` Unsubscribes all streams associated with this channel from the pubsub queue. stop\_stream\_for(model) Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 115 def stop_stream_for(model) stop_stream_from(broadcasting_for(model)) end ``` Unsubscribes streams for the `model`. stop\_stream\_from(broadcasting) Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 106 def stop_stream_from(broadcasting) callback = streams.delete(broadcasting) if callback pubsub.unsubscribe(broadcasting, callback) logger.info "#{self.class.name} stopped streaming from #{broadcasting}" end end ``` Unsubscribes streams from the named `broadcasting`. stream\_for(model, callback = nil, coder: nil, &block) Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 101 def stream_for(model, callback = nil, coder: nil, &block) stream_from(broadcasting_for(model), callback || block, coder: coder) end ``` Start streaming the pubsub queue for the `model` in this channel. Optionally, you can pass a `callback` that'll be used instead of the default of just transmitting the updates straight to the subscriber. Pass `coder: ActiveSupport::JSON` to decode messages as JSON before passing to the callback. 
Defaults to `coder: nil` which does no decoding, passes raw messages. stream\_from(broadcasting, callback = nil, coder: nil, &block) Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 76 def stream_from(broadcasting, callback = nil, coder: nil, &block) broadcasting = String(broadcasting) # Don't send the confirmation until pubsub#subscribe is successful defer_subscription_confirmation! # Build a stream handler by wrapping the user-provided callback with # a decoder or defaulting to a JSON-decoding retransmitter. handler = worker_pool_stream_handler(broadcasting, callback || block, coder: coder) streams[broadcasting] = handler connection.server.event_loop.post do pubsub.subscribe(broadcasting, handler, lambda do ensure_confirmation_sent logger.info "#{self.class.name} is streaming from #{broadcasting}" end) end end ``` Start streaming from the named `broadcasting` pubsub queue. Optionally, you can pass a `callback` that'll be used instead of the default of just transmitting the updates straight to the subscriber. Pass `coder: ActiveSupport::JSON` to decode messages as JSON before passing to the callback. Defaults to `coder: nil` which does no decoding, passes raw messages. stream\_or\_reject\_for(model) Show source ``` # File actioncable/lib/action_cable/channel/streams.rb, line 129 def stream_or_reject_for(model) if model stream_for model else reject end end ``` Calls [`stream_for`](streams#method-i-stream_for) with the given `model` if it's present to start streaming, otherwise rejects the subscription. rails class ActionCable::Channel::TestCase class ActionCable::Channel::TestCase ===================================== Parent: [ActiveSupport::TestCase](../../activesupport/testcase) Included modules: [ActionCable::Channel::TestCase::Behavior](testcase/behavior) Superclass for Action Cable channel functional tests. Basic example ------------- Functional tests are written as follows: 1. 
First, one uses the `subscribe` method to simulate subscription creation. 2. Then, one asserts whether the current state is as expected. “State” can be anything: transmitted messages, subscribed streams, etc. For example: ``` class ChatChannelTest < ActionCable::Channel::TestCase def test_subscribed_with_room_number # Simulate a subscription creation subscribe room_number: 1 # Asserts that the subscription was successfully created assert subscription.confirmed? # Asserts that the channel subscribes connection to a stream assert_has_stream "chat_1" # Asserts that the channel subscribes connection to a specific # stream created for a model assert_has_stream_for Room.find(1) end def test_does_not_stream_with_incorrect_room_number subscribe room_number: -1 # Asserts that no streams were started assert_no_streams end def test_does_not_subscribe_without_room_number subscribe # Asserts that the subscription was rejected assert subscription.rejected? end end ``` You can also perform actions: ``` def test_perform_speak subscribe room_number: 1 perform :speak, message: "Hello, Rails!" assert_equal "Hello, Rails!", transmissions.last["text"] end ``` Special methods --------------- [`ActionCable::Channel::TestCase`](testcase) will also automatically provide the following instance methods for use in the tests: **connection** An `ActionCable::Channel::ConnectionStub`, representing the current HTTP connection. **subscription** An instance of the current channel, created when you call `subscribe`. **transmissions** A list of all messages that have been transmitted into the channel. Channel is automatically inferred --------------------------------- [`ActionCable::Channel::TestCase`](testcase) will automatically infer the channel under test from the test class name. If the channel cannot be inferred from the test class name, you can explicitly set it with `tests`. 
``` class SpecialEdgeCaseChannelTest < ActionCable::Channel::TestCase tests SpecialChannel end ``` Specifying connection identifiers --------------------------------- You need to set up your connection manually to provide values for the identifiers. To do this just use: ``` stub_connection(user: users(:john)) ``` Testing broadcasting -------------------- [`ActionCable::Channel::TestCase`](testcase) enhances [`ActionCable::TestHelper`](../testhelper) assertions (e.g. `assert_broadcasts`) to handle broadcasting to models: ``` # in your channel def speak(data) broadcast_to room, text: data["message"] end def test_speak subscribe room_id: rooms(:chat).id assert_broadcast_on(rooms(:chat), text: "Hello, Rails!") do perform :speak, message: "Hello, Rails!" end end ``` rails module ActionCable::Channel::TestCase::Behavior module ActionCable::Channel::TestCase::Behavior ================================================ Included modules: [ActiveSupport::Testing::ConstantLookup](../../../activesupport/testing/constantlookup), [ActionCable::TestHelper](../../testhelper) CHANNEL\_IDENTIFIER assert\_broadcast\_on(stream\_or\_object, \*args) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 273 def assert_broadcast_on(stream_or_object, *args) super(broadcasting_for(stream_or_object), *args) end ``` Calls superclass method [`ActionCable::TestHelper#assert_broadcast_on`](../../testhelper#method-i-assert_broadcast_on) assert\_broadcasts(stream\_or\_object, \*args) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 269 def assert_broadcasts(stream_or_object, *args) super(broadcasting_for(stream_or_object), *args) end ``` Enhance [`TestHelper`](../../testhelper) assertions to handle non-String broadcastings Calls superclass method [`ActionCable::TestHelper#assert_broadcasts`](../../testhelper#method-i-assert_broadcasts) assert\_has\_stream(stream) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 295 
def assert_has_stream(stream) assert subscription.streams.include?(stream), "Stream #{stream} has not been started" end ``` Asserts that the specified stream has been started. ``` def test_assert_started_stream subscribe assert_has_stream 'messages' end ``` assert\_has\_stream\_for(object) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 306 def assert_has_stream_for(object) assert_has_stream(broadcasting_for(object)) end ``` Asserts that the specified stream for a model has started. ``` def test_assert_started_stream_for subscribe id: 42 assert_has_stream_for User.find(42) end ``` assert\_no\_streams() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 284 def assert_no_streams assert subscription.streams.empty?, "No streams started was expected, but #{subscription.streams.count} found" end ``` Asserts that no streams have been started. ``` def test_assert_no_started_stream subscribe assert_no_streams end ``` perform(action, data = {}) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 256 def perform(action, data = {}) check_subscribed! subscription.perform_action(data.stringify_keys.merge("action" => action.to_s)) end ``` Perform action on a channel. NOTE: Must be subscribed. 
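As the source above shows, `perform` simply stringifies the params and merges in the action name before handing the hash to `perform_action`. A plain-Ruby sketch of that payload, using `Hash#transform_keys` in place of Active Support's `stringify_keys`:

```ruby
# Builds the hash that `perform(:speak, message: "Hello, Rails!")` would
# pass to perform_action (sketch; stringify_keys approximated with
# transform_keys, so no Active Support is needed).
def action_payload(action, data = {})
  data.transform_keys(&:to_s).merge("action" => action.to_s)
end

action_payload(:speak, message: "Hello, Rails!")
# => {"message"=>"Hello, Rails!", "action"=>"speak"}
```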
stub\_connection(identifiers = {}) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 234 def stub_connection(identifiers = {}) @connection = ConnectionStub.new(identifiers) end ``` Set up test connection with the specified identifiers: ``` class ApplicationCable < ActionCable::Connection::Base identified_by :user, :token end stub_connection(user: users[:john], token: 'my-secret-token') ``` subscribe(params = {}) Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 239 def subscribe(params = {}) @connection ||= stub_connection @subscription = self.class.channel_class.new(connection, CHANNEL_IDENTIFIER, params.with_indifferent_access) @subscription.singleton_class.include(ChannelStub) @subscription.subscribe_to_channel @subscription end ``` Subscribe to the channel under test. Optionally pass subscription parameters as a [`Hash`](../../../hash). transmissions() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 262 def transmissions # Return only directly sent message (via #transmit) connection.transmissions.filter_map { |data| data["message"] } end ``` Returns messages transmitted into channel unsubscribe() Show source ``` # File actioncable/lib/action_cable/channel/test_case.rb, line 248 def unsubscribe check_subscribed! subscription.unsubscribe_from_channel end ``` Unsubscribe the subscription under test. rails module ActionCable::Channel::Broadcasting::ClassMethods module ActionCable::Channel::Broadcasting::ClassMethods ======================================================== broadcast\_to(model, message) Show source ``` # File actioncable/lib/action_cable/channel/broadcasting.rb, line 14 def broadcast_to(model, message) ActionCable.server.broadcast(broadcasting_for(model), message) end ``` Broadcast a hash to a unique broadcasting for this `model` in this channel. 
broadcasting\_for(model) Show source ``` # File actioncable/lib/action_cable/channel/broadcasting.rb, line 24 def broadcasting_for(model) serialize_broadcasting([ channel_name, model ]) end ``` Returns a unique broadcasting identifier for this `model` in this channel: ``` CommentsChannel.broadcasting_for("all") # => "comments:all" ``` You can pass any object as a target (e.g. Active Record model), and it would be serialized into a string under the hood. rails module ActionCable::Channel::Naming::ClassMethods module ActionCable::Channel::Naming::ClassMethods ================================================== channel\_name() Show source ``` # File actioncable/lib/action_cable/channel/naming.rb, line 16 def channel_name @channel_name ||= name.delete_suffix("Channel").gsub("::", ":").underscore end ``` Returns the name of the channel, underscored, without the `Channel` ending. If the channel is in a namespace, then the namespaces are represented by single colon separators in the channel name. ``` ChatChannel.channel_name # => 'chat' Chats::AppearancesChannel.channel_name # => 'chats:appearances' FooChats::BarAppearancesChannel.channel_name # => 'foo_chats:bar_appearances' ``` rails module ActionCable::Channel::PeriodicTimers::ClassMethods module ActionCable::Channel::PeriodicTimers::ClassMethods ========================================================== periodically(callback\_or\_method\_name = nil, every:, &block) Show source ``` # File actioncable/lib/action_cable/channel/periodic_timers.rb, line 31 def periodically(callback_or_method_name = nil, every:, &block) callback = if block_given? 
raise ArgumentError, "Pass a block or provide a callback arg, not both" if callback_or_method_name block else case callback_or_method_name when Proc callback_or_method_name when Symbol -> { __send__ callback_or_method_name } else raise ArgumentError, "Expected a Symbol method name or a Proc, got #{callback_or_method_name.inspect}" end end unless every.kind_of?(Numeric) && every > 0 raise ArgumentError, "Expected every: to be a positive number of seconds, got #{every.inspect}" end self.periodic_timers += [[ callback, every: every ]] end ``` Periodically performs a task on the channel, like updating an online user counter, polling a backend for new status messages, sending regular “heartbeat” messages, or doing some internal work and giving progress updates. Pass a method name or lambda argument or provide a block to call. Specify the calling period in seconds using the `every:` keyword argument. ``` periodically :transmit_progress, every: 5.seconds periodically every: 3.minutes do transmit action: :update_count, count: current_count end ``` rails class ActionCable::RemoteConnections::RemoteConnection class ActionCable::RemoteConnections::RemoteConnection ======================================================= Parent: [Object](../../object) Represents a single remote connection found via `ActionCable.server.remote_connections.where(*)`. Exists solely for the purpose of calling [`disconnect`](remoteconnection#method-i-disconnect) on that connection. server[R] new(server, ids) Show source ``` # File actioncable/lib/action_cable/remote_connections.rb, line 41 def initialize(server, ids) @server = server set_identifier_instance_vars(ids) end ``` disconnect() Show source ``` # File actioncable/lib/action_cable/remote_connections.rb, line 47 def disconnect server.broadcast internal_channel, { type: "disconnect" } end ``` Uses the internal channel to disconnect the connection. 
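A typical invocation looks like the following sketch. It assumes the connection class declares `identified_by :current_user`, and `some_user` stands in for whatever record identifies the connection; it is not runnable outside a Rails app:

```ruby
# Disconnect every cable connection identified as this user.
# Assumes `identified_by :current_user` on the connection class;
# `some_user` is a placeholder for the identifying record.
ActionCable.server.remote_connections.where(current_user: some_user).disconnect
```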
rails class ActionCable::SubscriptionAdapter::Test class ActionCable::SubscriptionAdapter::Test ============================================= Parent: ActionCable::SubscriptionAdapter::Async [`Test`](test) adapter for Action Cable ---------------------------------------- The test adapter should be used only in testing. Along with `ActionCable::TestHelper` it makes a great tool to test your Rails application. To use the test adapter set `adapter` value to `test` in your `config/cable.yml` file. NOTE: [`Test`](test) adapter extends the `ActionCable::SubscriptionsAdapter::Async` adapter, so it could be used in system tests too. broadcast(channel, payload) Show source ``` # File actioncable/lib/action_cable/subscription_adapter/test.rb, line 17 def broadcast(channel, payload) broadcasts(channel) << payload super end ``` Calls superclass method broadcasts(channel) Show source ``` # File actioncable/lib/action_cable/subscription_adapter/test.rb, line 22 def broadcasts(channel) channels_data[channel] ||= [] end ``` clear() Show source ``` # File actioncable/lib/action_cable/subscription_adapter/test.rb, line 30 def clear @channels_data = nil end ``` clear\_messages(channel) Show source ``` # File actioncable/lib/action_cable/subscription_adapter/test.rb, line 26 def clear_messages(channel) channels_data[channel] = [] end ```
rails module ActionCable::Helpers::ActionCableHelper module ActionCable::Helpers::ActionCableHelper =============================================== action\_cable\_meta\_tag() Show source ``` # File actioncable/lib/action_cable/helpers/action_cable_helper.rb, line 34 def action_cable_meta_tag tag "meta", name: "action-cable-url", content: ( ActionCable.server.config.url || ActionCable.server.config.mount_path || raise("No Action Cable URL configured -- please configure this at config.action_cable.url") ) end ``` Returns an “action-cable-url” meta tag with the value of the URL specified in your configuration. Ensure this is above your JavaScript tag: ``` <head> <%= action_cable_meta_tag %> <%= javascript_include_tag 'application', 'data-turbo-track' => 'reload' %> </head> ``` This is then used by Action Cable to determine the URL of your WebSocket server. Your JavaScript can then connect to the server without needing to specify the URL directly: ``` import Cable from "@rails/actioncable" window.Cable = Cable window.App = {} App.cable = Cable.createConsumer() ``` Make sure to specify the correct server location in each of your environment config files: ``` config.action_cable.mount_path = "/cable123" <%= action_cable_meta_tag %> would render: => <meta name="action-cable-url" content="/cable123" /> config.action_cable.url = "ws://actioncable.com" <%= action_cable_meta_tag %> would render: => <meta name="action-cable-url" content="ws://actioncable.com" /> ``` rails class Enumerable::SoleItemExpectedError class Enumerable::SoleItemExpectedError ======================================== Parent: StandardError Error generated by `sole` when called on an enumerable that doesn't have exactly one item. 
rails module AbstractController::Callbacks module AbstractController::Callbacks ===================================== Included modules: [ActiveSupport::Callbacks](../activesupport/callbacks) Abstract Controller [`Callbacks`](callbacks) ============================================ Abstract Controller provides hooks during the life cycle of a controller action. [`Callbacks`](callbacks) allow you to trigger logic during this cycle. Available callbacks are: * `after_action` * `append_after_action` * `append_around_action` * `append_before_action` * `around_action` * `before_action` * `prepend_after_action` * `prepend_around_action` * `prepend_before_action` * `skip_after_action` * `skip_around_action` * `skip_before_action` NOTE: Calling the same callback multiple times will overwrite previous callback definitions. rails module AbstractController::Caching module AbstractController::Caching =================================== Included modules: AbstractController::Caching::ConfigMethods, [AbstractController::Caching::Fragments](caching/fragments) view\_cache\_dependencies() Show source ``` # File actionpack/lib/abstract_controller/caching.rb, line 52 def view_cache_dependencies self.class._view_cache_dependencies.filter_map { |dep| instance_exec(&dep) } end ``` cache(key, options = {}) { || ... } Show source ``` # File actionpack/lib/abstract_controller/caching.rb, line 58 def cache(key, options = {}, &block) # :doc: if cache_configured? cache_store.fetch(ActiveSupport::Cache.expand_cache_key(key, :controller), options, &block) else yield end end ``` Convenience accessor. rails class AbstractController::Base class AbstractController::Base =============================== Parent: [Object](../object) Included modules: [ActiveSupport::Configurable](../activesupport/configurable) [`AbstractController::Base`](base) is a low-level API. 
Nobody should be using it directly, and subclasses (like [`ActionController::Base`](../actioncontroller/base)) are expected to provide their own `render` method, since rendering means different things depending on the context. abstract[R] abstract?[R] abstract!() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 55 def abstract! @abstract = true end ``` Define a controller as abstract. See [`internal_methods`](base#method-c-internal_methods) for more details. action\_methods() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 89 def action_methods @action_methods ||= begin # All public instance methods of this class, including ancestors methods = (public_instance_methods(true) - # Except for public instance methods of Base and its ancestors internal_methods + # Be sure to include shadowed public instance methods of this class public_instance_methods(false)) methods.map!(&:to_s) methods.to_set end end ``` A list of method names that should be considered actions. This includes all public instance methods on a controller, less any internal methods (see [`internal_methods`](base#method-c-internal_methods)), adding back in any methods that are internal, but still exist on the class itself. #### Returns * `Set` - A set of all methods that should be considered actions. clear\_action\_methods!() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 107 def clear_action_methods! @action_methods = nil end ``` [`action_methods`](base#method-c-action_methods) are cached and there is sometimes a need to refresh them. [`::clear_action_methods!`](base#method-c-clear_action_methods-21) allows you to do that, so next time you run [`action_methods`](base#method-c-action_methods), they will be recalculated. controller\_path() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 121 def controller_path @controller_path ||= name.delete_suffix("Controller").underscore unless anonymous? 
end ``` Returns the full controller name, underscored, without the ending Controller. ``` class MyApp::MyPostsController < AbstractController::Base end MyApp::MyPostsController.controller_path # => "my_app/my_posts" ``` #### Returns * `String` internal\_methods() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 74 def internal_methods controller = self controller = controller.superclass until controller.abstract? controller.public_instance_methods(true) end ``` A list of all internal methods for a controller. This finds the first abstract superclass of a controller, and gets a list of all public instance methods on that abstract class. Public instance methods of a controller would normally be considered action methods, so methods declared on abstract classes are being removed. (`ActionController::Metal` and [`ActionController::Base`](../actioncontroller/base) are defined as abstract) method\_added(name) Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 126 def method_added(name) super clear_action_methods! end ``` Refresh the cached [`action_methods`](base#method-c-action_methods) when a new action\_method is added. Calls superclass method supports\_path?() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 189 def self.supports_path? true end ``` Returns true if the given controller is capable of rendering a path. A subclass of `AbstractController::Base` may return false. An Email controller for example does not support paths, only full URLs. action\_methods() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 160 def action_methods self.class.action_methods end ``` Delegates to the class' [`::action_methods`](base#method-c-action_methods) action\_name() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 40 attr_internal :action_name ``` Returns the name of the action this controller is processing. 
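The set arithmetic that `::action_methods` performs can be sketched in plain Ruby, with no Action Pack required. `MiniBase` and `PostsController` are hypothetical stand-ins for an abstract base and a concrete controller:

```
require "set"

# An "abstract" base class standing in for AbstractController::Base.
class MiniBase
  # Everything public on the base (including Object's methods) is internal.
  def self.internal_methods
    public_instance_methods(true)
  end
end

class PostsController < MiniBase
  def index; end
  def show; end

  private
    def find_post; end  # private, so never an action
end

# Same shape as the action_methods source: all public methods, minus the
# internal ones, plus anything public declared directly on this class.
actions = (PostsController.public_instance_methods(true) -
           MiniBase.internal_methods +
           PostsController.public_instance_methods(false))
             .map(&:to_s).to_set

puts actions.sort.inspect  # => ["index", "show"]
```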
available\_action?(action\_name) Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 174 def available_action?(action_name) _find_action_name(action_name) end ``` Returns true if a method for the action is available and can be dispatched, false otherwise. Notice that `action_methods.include?("foo")` may return false and `available_action?("foo")` returns true because this method considers actions that are also available through other means, for example, implicit render ones. #### Parameters * `action_name` - The name of an action to be tested controller\_path() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 155 def controller_path self.class.controller_path end ``` Delegates to the class' [`::controller_path`](base#method-c-controller_path) formats() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 44 attr_internal :formats ``` Returns the formats that can be processed by the controller. performed?() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 181 def performed? response_body end ``` Tests if a response body is set. Used to determine if the `process_action` callback needs to be terminated in `AbstractController::Callbacks`. process(action, \*args) Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 142 def process(action, *args) @_action_name = action.to_s unless action_name = _find_action_name(@_action_name) raise ActionNotFound.new("The action '#{action}' could not be found for #{self.class.name}", self, action) end @_response_body = nil process_action(action_name, *args) end ``` Calls the action going through the entire action dispatch stack. The actual method that is called is determined by calling method\_for\_action. If no method can handle the action, then an [`AbstractController::ActionNotFound`](actionnotfound) error is raised. 
#### Returns * `self` response\_body() Show source ``` # File actionpack/lib/abstract_controller/base.rb, line 36 attr_internal :response_body ``` Returns the body of the HTTP response sent by the controller. rails module AbstractController::Rendering module AbstractController::Rendering ===================================== Included modules: [ActionView::ViewPaths](../actionview/viewpaths) DEFAULT\_PROTECTED\_INSTANCE\_VARIABLES render(\*args, &block) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 23 def render(*args, &block) options = _normalize_render(*args, &block) rendered_body = render_to_body(options) if options[:html] _set_html_content_type else _set_rendered_content_type rendered_format end _set_vary_header self.response_body = rendered_body end ``` Normalizes arguments, options and then delegates [`render_to_body`](rendering#method-i-render_to_body) and sticks the result in `self.response_body`. render\_to\_body(options = {}) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 51 def render_to_body(options = {}) end ``` Performs the actual template rendering. render\_to\_string(\*args, &block) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 45 def render_to_string(*args, &block) options = _normalize_render(*args, &block) render_to_body(options) end ``` Raw rendering of a template to a string. It is similar to render, except that it does not set the `response_body` and it should be guaranteed to always return a string. If a component extends the semantics of `response_body` (as [`ActionController`](../actioncontroller) extends it to be anything that responds to the method each), this method needs to be overridden in order to still return a string. rendered\_format() Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 55 def rendered_format Mime[:text] end ``` Returns Content-Type of rendered content. 
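The difference between `render` and `render_to_string` above can be sketched with a toy class; this mirrors the normalize/render-to-body/store flow but is not the Action Pack pipeline:

```
class MiniRenderer
  attr_accessor :response_body

  # render: normalize, build the body, and store it in response_body.
  def render(*args)
    options = normalize(*args)
    self.response_body = render_to_body(options)
  end

  # render_to_string: same pipeline, but response_body is left untouched
  # and the rendered string is returned directly.
  def render_to_string(*args)
    render_to_body(normalize(*args))
  end

  private
    # Stand-in for _normalize_render: `render :index` becomes {action: :index}.
    def normalize(action = nil, options = {})
      action.is_a?(Hash) ? action : options.merge(action: action)
    end

    # Stand-in for the actual template rendering.
    def render_to_body(options)
      "rendered #{options[:action]}"
    end
end

r = MiniRenderer.new
puts r.render_to_string(:index)   # => rendered index
puts r.response_body.inspect      # => nil
r.render(:show)
puts r.response_body              # => rendered show
```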
view\_assigns() Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 63 def view_assigns variables = instance_variables - _protected_ivars variables.each_with_object({}) do |name, hash| hash[name.slice(1, name.length)] = instance_variable_get(name) end end ``` This method should return a hash with assigns. You can overwrite this configuration per controller. \_normalize\_args(action = nil, options = {}) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 75 def _normalize_args(action = nil, options = {}) # :doc: if action.respond_to?(:permitted?) if action.permitted? action else raise ArgumentError, "render parameters are not permitted" end elsif action.is_a?(Hash) action else options end end ``` Normalize args by converting `render "foo"` to `render :action => "foo"` and `render "foo/bar"` to `render :file => "foo/bar"`. \_normalize\_options(options) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 90 def _normalize_options(options) # :doc: options end ``` Normalize options. \_process\_options(options) Show source ``` # File actionpack/lib/abstract_controller/rendering.rb, line 95 def _process_options(options) # :doc: options end ``` Process extra options. rails module AbstractController::Translation module AbstractController::Translation ======================================= l(object, \*\*options) Alias for: [localize](translation#method-i-localize) localize(object, \*\*options) Show source ``` # File actionpack/lib/abstract_controller/translation.rb, line 33 def localize(object, **options) I18n.localize(object, **options) end ``` Delegates to `I18n.localize`. Also aliased as `l`. 
Also aliased as: [l](translation#method-i-l) t(key, \*\*options) Alias for: [translate](translation#method-i-translate) translate(key, \*\*options) Show source ``` # File actionpack/lib/abstract_controller/translation.rb, line 17 def translate(key, **options) if key&.start_with?(".") path = controller_path.tr("/", ".") defaults = [:"#{path}#{key}"] defaults << options[:default] if options[:default] options[:default] = defaults.flatten key = "#{path}.#{action_name}#{key}" end i18n_raise = options.fetch(:raise, self.raise_on_missing_translations) ActiveSupport::HtmlSafeTranslation.translate(key, **options, raise: i18n_raise) end ``` Delegates to `I18n.translate`. Also aliased as `t`. When the given key starts with a period, it will be scoped by the current controller and action. So if you call `translate(".foo")` from `PeopleController#index`, it will convert the call to `I18n.translate("people.index.foo")`. This makes it less repetitive to translate many keys within the same controller / action and gives you a simple framework for scoping them consistently. Also aliased as: [t](translation#method-i-t) rails class AbstractController::ActionNotFound class AbstractController::ActionNotFound ========================================= Parent: StandardError Raised when a non-existing controller action is triggered. rails module AbstractController::UrlFor module AbstractController::UrlFor ================================== Included modules: [ActionDispatch::Routing::UrlFor](../actiondispatch/routing/urlfor) Includes `url_for` into the host class (e.g. an abstract controller or mailer). The class has to provide a `RouteSet` by implementing the `_routes` methods. Otherwise, an exception will be raised. Note that this module is completely decoupled from HTTP - the only requirement is a valid `_routes` implementation. 
\_routes() Show source ``` # File actionpack/lib/abstract_controller/url_for.rb, line 14 def _routes raise "In order to use #url_for, you must include routing helpers explicitly. " \ "For instance, `include Rails.application.routes.url_helpers`." end ``` rails module AbstractController::Callbacks::ClassMethods module AbstractController::Callbacks::ClassMethods =================================================== \_insert\_callbacks(callbacks, block = nil) { |callback, options| ... } Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 96 def _insert_callbacks(callbacks, block = nil) options = callbacks.extract_options! _normalize_callback_options(options) callbacks.push(block) if block callbacks.each do |callback| yield callback, options end end ``` Take callback names and an optional callback proc, normalize them, then call the block with each callback. This allows us to abstract the normalization across several methods that use it. #### Parameters * `callbacks` - An array of callbacks, with an optional options hash as the last parameter. * `block` - A proc that should be added to the callbacks. #### Block Parameters * `name` - The callback to be added. * `options` - A hash of options to be used when adding the callback. \_normalize\_callback\_options(options) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 72 def _normalize_callback_options(options) _normalize_callback_option(options, :only, :if) _normalize_callback_option(options, :except, :unless) end ``` If `:only` or `:except` are used, convert the options into the `:if` and `:unless` options of [`ActiveSupport::Callbacks`](../../activesupport/callbacks). The basic idea is that `:only => :index` gets converted to `:if => proc {|c| c.action_name == "index" }`. Note that `:only` has priority over `:if` in case they are used together. ``` only: :index, if: -> { true } # the :if option will be ignored. 
``` Note that `:if` has priority over `:except` in case they are used together. ``` except: :index, if: -> { true } # the :except option will be ignored. ``` #### Options * `only` - The callback should be run only for this action. * `except` - The callback should be run for all actions except this action. after\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 146 ``` Append a callback after actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. append\_after\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 167 ``` Append a callback after actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. append\_around\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 195 ``` Append a callback around actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. append\_before\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 135 ``` Append a callback before actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. If the callback renders or redirects, the action will not run. If there are additional callbacks scheduled to run after that callback, they are also cancelled. around\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 174 ``` Append a callback around actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. before\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 106 ``` Append a callback before actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. If the callback renders or redirects, the action will not run. 
If there are additional callbacks scheduled to run after that callback, they are also cancelled. prepend\_after\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 153 ``` Prepend a callback after actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. prepend\_around\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 181 ``` Prepend a callback around actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. prepend\_before\_action(names, block) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 117 ``` Prepend a callback before actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. If the callback renders or redirects, the action will not run. If there are additional callbacks scheduled to run after that callback, they are also cancelled. skip\_after\_action(names) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 160 ``` Skip a callback after actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. skip\_around\_action(names) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 188 ``` Skip a callback around actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details. skip\_before\_action(names) Show source ``` # File actionpack/lib/abstract_controller/callbacks.rb, line 128 ``` Skip a callback before actions. See [`_insert_callbacks`](classmethods#method-i-_insert_callbacks) for parameter details.
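How `before_action` with `:only` and callback halting behave can be sketched with a toy dispatcher in plain Ruby. This is not the Active Support callback machinery; `MiniController` and its subclass are invented for illustration:

```
class MiniController
  def self.before_actions
    @before_actions ||= []
  end

  # Register a callback; :only is normalized to an array of action names,
  # mirroring how :only becomes a condition on the current action name.
  def self.before_action(name, only: nil)
    before_actions << { name: name, only: only && Array(only).map(&:to_s) }
  end

  def process(action)
    self.class.before_actions.each do |cb|
      next if cb[:only] && !cb[:only].include?(action.to_s)
      send(cb[:name])
      # A callback that "renders or redirects" halts: the action never runs.
      return @performed if @performed
    end
    send(action)
  end
end

class SessionsController < MiniController
  before_action :require_login, only: :show

  def require_login
    @performed = "redirected to /login"
  end

  def show
    "showing the session"
  end

  def index
    "listing sessions"
  end
end

puts SessionsController.new.process(:index)  # => listing sessions
puts SessionsController.new.process(:show)   # => redirected to /login
```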
rails module AbstractController::Caching::Fragments module AbstractController::Caching::Fragments ============================================== Fragment caching is used for caching various blocks within views without caching the entire action as a whole. This is useful when certain elements of an action change frequently or depend on complicated state while other parts rarely change or can be shared amongst multiple parties. The caching is done using the `cache` helper available in the Action View. See [`ActionView::Helpers::CacheHelper`](../../actionview/helpers/cachehelper) for more information. While it's strongly recommended that you use key-based cache expiration (see links in CacheHelper for more information), it is also possible to manually expire caches. For example: ``` expire_fragment('name_of_cache') ``` combined\_fragment\_cache\_key(key) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 68 def combined_fragment_cache_key(key) head = self.class.fragment_cache_keys.map { |k| instance_exec(&k) } tail = key.is_a?(Hash) ? url_for(key).split("://").last : key cache_key = [:views, ENV["RAILS_CACHE_ID"] || ENV["RAILS_APP_VERSION"], head, tail] cache_key.flatten!(1) cache_key.compact! cache_key end ``` Given a key (as described in `expire_fragment`), returns a key array suitable for use in reading, writing, or expiring a cached fragment. All keys begin with `:views`, followed by `ENV["RAILS_CACHE_ID"]` or `ENV["RAILS_APP_VERSION"]` if set, followed by any controller-wide key prefix values, ending with the specified `key` value. expire\_fragment(key, options = nil) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 132 def expire_fragment(key, options = nil) return unless cache_configured? 
key = combined_fragment_cache_key(key) unless key.is_a?(Regexp) instrument_fragment_cache :expire_fragment, key do if key.is_a?(Regexp) cache_store.delete_matched(key, options) else cache_store.delete(key, options) end end end ``` Removes fragments from the cache. `key` can take one of three forms: * [`String`](../../string) - This would normally take the form of a path, like `pages/45/notes`. * [`Hash`](../../hash) - Treated as an implicit call to `url_for`, like `{ controller: 'pages', action: 'notes', id: 45}` * [`Regexp`](../../regexp) - Will remove any fragment that matches, so `%r{pages/\d*/notes}` might remove all notes. Make sure you don't use anchors in the regex (`^` or `$`) because the actual filename matched looks like `./cache/filename/path.cache`. Note: [`Regexp`](../../regexp) expiration is only supported on caches that can iterate over all keys (unlike memcached). `options` is passed through to the cache store's `delete` method (or `delete_matched`, for [`Regexp`](../../regexp) keys). fragment\_exist?(key, options = nil) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 105 def fragment_exist?(key, options = nil) return unless cache_configured? key = combined_fragment_cache_key(key) instrument_fragment_cache :exist_fragment?, key do cache_store.exist?(key, options) end end ``` Check if a cached fragment from the location signified by `key` exists (see `expire_fragment` for acceptable formats). read\_fragment(key, options = nil) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 93 def read_fragment(key, options = nil) return unless cache_configured? key = combined_fragment_cache_key(key) instrument_fragment_cache :read_fragment, key do result = cache_store.read(key, options) result.respond_to?(:html_safe) ? result.html_safe : result end end ``` Reads a cached fragment from the location signified by `key` (see `expire_fragment` for acceptable formats). 
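The write/read/expire cycle this module describes can be sketched with a Hash standing in for the configured cache store. The key combination is simplified and these are free-standing stand-ins, not the Action Pack source:

```
CACHE = {}

# Simplified stand-in for combined_fragment_cache_key: prefix with :views.
def combined_key(key)
  [:views, key].join("/")
end

def write_fragment(key, content)
  CACHE[combined_key(key)] = content
  content
end

def read_fragment(key)
  CACHE[combined_key(key)]
end

def expire_fragment(key)
  CACHE.delete(combined_key(key))
end

write_fragment("pages/45/notes", "<p>cached notes</p>")
puts read_fragment("pages/45/notes")          # => <p>cached notes</p>
expire_fragment("pages/45/notes")
puts read_fragment("pages/45/notes").inspect  # => nil
```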
write\_fragment(key, content, options = nil) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 80 def write_fragment(key, content, options = nil) return content unless cache_configured? key = combined_fragment_cache_key(key) instrument_fragment_cache :write_fragment, key do content = content.to_str cache_store.write(key, content, options) end content end ``` Writes `content` to the location signified by `key` (see `expire_fragment` for acceptable formats). rails module AbstractController::Caching::Fragments::ClassMethods module AbstractController::Caching::Fragments::ClassMethods ============================================================ fragment\_cache\_key(value = nil, &key) Show source ``` # File actionpack/lib/abstract_controller/caching/fragments.rb, line 57 def fragment_cache_key(value = nil, &key) self.fragment_cache_keys += [key || -> { value }] end ``` Allows you to specify controller-wide key prefixes for cache fragments. Pass either a constant `value`, or a block which computes a value each time a cache key is generated. For example, you may want to prefix all fragment cache keys with a global version identifier, so you can easily invalidate all caches. ``` class ApplicationController fragment_cache_key "v1" end ``` When it's time to invalidate all fragments, simply change the string constant. Or, progressively roll out the cache invalidation using a computed value: ``` class ApplicationController fragment_cache_key do @account.id.odd? ? 
"v1" : "v2" end end ``` rails module AbstractController::Helpers::ClassMethods module AbstractController::Helpers::ClassMethods ================================================= \_helpers[W] \_helpers\_for\_modification() Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 184 def _helpers_for_modification unless @_helpers self._helpers = define_helpers_module(self, superclass._helpers) end _helpers end ``` clear\_helpers() Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 158 def clear_helpers inherited_helper_methods = _helper_methods self._helpers = Module.new self._helper_methods = Array.new inherited_helper_methods.each { |meth| helper_method meth } default_helper_module! unless anonymous? end ``` Clears up all existing helpers in this class, only keeping the helper with the same name as this class. helper(\*args, &block) Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 147 def helper(*args, &block) modules_for_helpers(args).each do |mod| next if _helpers.include?(mod) _helpers_for_modification.include(mod) end _helpers_for_modification.module_eval(&block) if block_given? end ``` Includes the given modules in the template class. Modules can be specified in different ways. All of the following calls include `FooHelper`: ``` # Module, recommended. helper FooHelper # String/symbol without the "helper" suffix, camel or snake case. helper "Foo" helper :Foo helper "foo" helper :foo ``` The last two assume that `"foo".camelize` returns “Foo”. When strings or symbols are passed, the method finds the actual module object using +String#constantize+. Therefore, if the module has not been yet loaded, it has to be autoloadable, which is normally the case. Namespaces are supported. The following calls include `Foo::BarHelper`: ``` # Module, recommended. helper Foo::BarHelper # String/symbol without the "helper" suffix, camel or snake case. 
helper "Foo::Bar" helper :"Foo::Bar" helper "foo/bar" helper :"foo/bar" ``` The last two assume that `"foo/bar".camelize` returns “Foo::Bar”. The method accepts a block too. If present, the block is evaluated in the context of the controller helper module. This simple call makes the `wadus` method available in templates of the enclosing controller: ``` helper do def wadus "wadus" end end ``` Furthermore, all the above styles can be mixed together: ``` helper FooHelper, "woo", "bar/baz" do def wadus "wadus" end end ``` helper\_method(\*methods) Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 79 def helper_method(*methods) methods.flatten! self._helper_methods += methods location = caller_locations(1, 1).first file, line = location.path, location.lineno methods.each do |method| _helpers_for_modification.class_eval <<~ruby_eval, file, line def #{method}(*args, &block) # def current_user(*args, &block) controller.send(:'#{method}', *args, &block) # controller.send(:'current_user', *args, &block) end # end ruby2_keywords(:'#{method}') ruby_eval end end ``` Declare a controller method as a helper. For example, the following makes the `current_user` and `logged_in?` controller methods available to the view: ``` class ApplicationController < ActionController::Base helper_method :current_user, :logged_in? def current_user @current_user ||= User.find_by(id: session[:user]) end def logged_in? current_user != nil end end ``` In a view: ``` <% if logged_in? -%>Welcome, <%= current_user.name %><% end -%> ``` #### Parameters * `method[, method]` - A name or names of a method on the controller to be made available on the view. inherited(klass) Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 48 def inherited(klass) # Inherited from parent by default klass._helpers = nil klass.class_eval { default_helper_module! } unless klass.anonymous? super end ``` When a class is inherited, wrap its helper module in a new module. 
This ensures that the parent class's module can be changed independently of the child class's. Calls superclass method modules\_for\_helpers(modules\_or\_helper\_prefixes) Show source ``` # File actionpack/lib/abstract_controller/helpers.rb, line 169 def modules_for_helpers(modules_or_helper_prefixes) modules_or_helper_prefixes.flatten.map! do |module_or_helper_prefix| case module_or_helper_prefix when Module module_or_helper_prefix when String, Symbol helper_prefix = module_or_helper_prefix.to_s helper_prefix = helper_prefix.camelize unless helper_prefix.start_with?(/[A-Z]/) "#{helper_prefix}Helper".constantize else raise ArgumentError, "helper must be a String, Symbol, or Module" end end end ``` Given an array of values like the ones accepted by `helper`, this method returns an array with the corresponding modules, in the same order. rails class ActionDispatch::RequestId class ActionDispatch::RequestId ================================ Parent: [Object](../object) Makes a unique request id available to the `action_dispatch.request_id` env variable (which is then accessible through `ActionDispatch::Request#request_id` or the alias `ActionDispatch::Request#uuid`) and sends the same id to the client via the X-Request-Id header. The unique request id is either based on the X-Request-Id header in the request, which would typically be generated by a firewall, load balancer, or the web server, or, if this header is not available, a random uuid. If the header is accepted from the outside world, we sanitize it to a max of 255 chars and alphanumeric and dashes only. The unique request id can be used to trace a request end-to-end and would typically end up being part of log files from multiple pieces of the stack. 
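The id handling described above (prefer a sanitized inbound header, else a random UUID; sanitization per the text is a 255-character cap with alphanumerics and dashes only) can be sketched as follows. `make_request_id` here is an illustration of that rule, not the middleware's exact source:

```
require "securerandom"

def make_request_id(header_value)
  if header_value && !header_value.empty?
    # Keep only alphanumerics and dashes, capped at 255 characters.
    header_value.gsub(/[^A-Za-z0-9\-]/, "")[0, 255]
  else
    # No usable header: fall back to a random UUID.
    SecureRandom.uuid
  end
end

puts make_request_id("req-42-abc")   # => req-42-abc
puts make_request_id("evil\nid!!")   # => evilid
puts make_request_id(nil).length     # => 36
```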
new(app, header:) Show source ``` # File actionpack/lib/action_dispatch/middleware/request_id.rb, line 18 def initialize(app, header:) @app = app @header = header end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/request_id.rb, line 23 def call(env) req = ActionDispatch::Request.new env req.request_id = make_request_id(req.headers[@header]) @app.call(env).tap { |_status, headers, _body| headers[@header] = req.request_id } end ``` rails class ActionDispatch::Callbacks class ActionDispatch::Callbacks ================================ Parent: [Object](../object) Included modules: [ActiveSupport::Callbacks](../activesupport/callbacks) Provides callbacks to be executed before and after dispatching the request. after(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/callbacks.rb, line 15 def after(*args, &block) set_callback(:call, :after, *args, &block) end ``` before(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/callbacks.rb, line 11 def before(*args, &block) set_callback(:call, :before, *args, &block) end ``` new(app) Show source ``` # File actionpack/lib/action_dispatch/middleware/callbacks.rb, line 20 def initialize(app) @app = app end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/callbacks.rb, line 24 def call(env) error = nil result = run_callbacks :call do @app.call(env) rescue => error end raise error if error result end ``` rails module ActionDispatch::Routing module ActionDispatch::Routing =============================== The routing module provides URL rewriting in native Ruby. It's a way to redirect incoming requests to controllers and actions. This replaces mod\_rewrite rules. Best of all, Rails' Routing works with any web server. Routes are defined in `config/routes.rb`. Think of creating routes as drawing a map for your requests. 
The map tells them where to go based on some predefined pattern: ``` Rails.application.routes.draw do Pattern 1 tells some request to go to one place Pattern 2 tell them to go to another ... end ``` The following symbols are special: ``` :controller maps to your controller name :action maps to an action with your controllers ``` Other names simply map to a parameter as in the case of `:id`. Resources --------- Resource routing allows you to quickly declare all of the common routes for a given resourceful controller. Instead of declaring separate routes for your `index`, `show`, `new`, `edit`, `create`, `update` and `destroy` actions, a resourceful route declares them in a single line of code: ``` resources :photos ``` Sometimes, you have a resource that clients always look up without referencing an ID. A common example, /profile always shows the profile of the currently logged in user. In this case, you can use a singular resource to map /profile (rather than /profile/:id) to the show action. ``` resource :profile ``` It's common to have resources that are logically children of other resources: ``` resources :magazines do resources :ads end ``` You may wish to organize groups of controllers under a namespace. Most commonly, you might group a number of administrative controllers under an `admin` namespace. You would place these controllers under the `app/controllers/admin` directory, and you can group them together in your router: ``` namespace "admin" do resources :posts, :comments end ``` Alternatively, you can add prefixes to your path without using a separate directory by using `scope`. `scope` takes additional options which apply to all enclosed routes. ``` scope path: "/cpanel", as: 'admin' do resources :posts, :comments end ``` For more, see `Routing::Mapper::Resources#resources`, `Routing::Mapper::Scoping#namespace`, and `Routing::Mapper::Scoping#scope`. 
Non-resourceful routes ---------------------- For routes that don't fit the `resources` mold, you can use the HTTP helper methods `get`, `post`, `patch`, `put` and `delete`. ``` get 'post/:id', to: 'posts#show' post 'post/:id', to: 'posts#create_comment' ``` Now, if you POST to `/posts/:id`, it will route to the `create_comment` action. A GET on the same URL will route to the `show` action. If your route needs to respond to more than one HTTP method (or all methods) then using the `:via` option on `match` is preferable. ``` match 'post/:id', to: 'posts#show', via: [:get, :post] ``` Named routes ------------ Routes can be named by passing an `:as` option, allowing for easy reference within your source as `name_of_route_url` for the full URL and `name_of_route_path` for the URI path. Example: ``` # In config/routes.rb get '/login', to: 'accounts#login', as: 'login' # With render, redirect_to, tests, etc. redirect_to login_url ``` Arguments can be passed as well. ``` redirect_to show_item_path(id: 25) ``` Use `root` as a shorthand to name a route for the root path “/”. ``` # In config/routes.rb root to: 'blogs#index' # would recognize http://www.example.com/ as params = { controller: 'blogs', action: 'index' } # and provide these named routes root_url # => 'http://www.example.com/' root_path # => '/' ``` Note: when using `controller`, the route is simply named after the method you call on the block parameter rather than map. ``` # In config/routes.rb controller :blog do get 'blog/show', to: :list get 'blog/delete', to: :delete get 'blog/edit', to: :edit end # provides named routes for show, delete, and edit link_to @article.title, blog_show_path(id: @article.id) ``` Pretty URLs ----------- Routes can generate pretty URLs. 
For example: ``` get '/articles/:year/:month/:day', to: 'articles#find_by_id', constraints: { year: /\d{4}/, month: /\d{1,2}/, day: /\d{1,2}/ } ``` Using the route above, the URL “localhost:3000/articles/2005/11/06” maps to ``` params = {year: '2005', month: '11', day: '06'} ``` Regular Expressions and parameters ---------------------------------- You can specify a regular expression to define a format for a parameter. ``` controller 'geocode' do get 'geocode/:postalcode', to: :show, constraints: { postalcode: /\d{5}(-\d{4})?/ } end ``` Constraints can include the 'ignorecase' and 'extended syntax' regular expression modifiers: ``` controller 'geocode' do get 'geocode/:postalcode', to: :show, constraints: { postalcode: /hx\d\d\s\d[a-z]{2}/i } end controller 'geocode' do get 'geocode/:postalcode', to: :show, constraints: { postalcode: /# Postalcode format \d{5} #Prefix (-\d{4})? #Suffix /x } end ``` Using the multiline modifier will raise an `ArgumentError`. Encoding regular expression modifiers are silently ignored. The match will always use the default encoding or ASCII. External redirects ------------------ You can redirect any path to another path using the redirect helper in your router: ``` get "/stories", to: redirect("/posts") ``` Unicode character routes ------------------------ You can specify unicode character routes in your router: ``` get "こんにちは", to: "welcome#index" ``` [`Routing`](routing) to Rack Applications ------------------------------------------ Instead of a [`String`](../string), like `posts#index`, which corresponds to the index action in the PostsController, you can specify any Rack application as the endpoint for a matcher: ``` get "/application.js", to: Sprockets ``` Reloading routes ---------------- You can reload routes if you feel you must: ``` Rails.application.reload_routes! ``` This will clear all named routes and reload config/routes.rb if the file has been modified from last load. To absolutely force reloading, use `reload!`. 
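As noted in the Rack section above, any object that responds to `call(env)` and returns a `[status, headers, body]` triple can serve as a route endpoint. A minimal, framework-free sketch (the `js_app` name is invented for illustration):

```ruby
# Minimal Rack application: a lambda responding to #call(env) with a
# [status, headers, body] triple. It can be invoked directly, no server needed.
js_app = lambda do |env|
  [200, { "Content-Type" => "application/javascript" }, ["console.log('hi');"]]
end

# In a router it could then serve as an endpoint, e.g.:
#   get "/application.js", to: js_app
status, headers, body = js_app.call({})
puts status      # => 200
puts body.join   # => console.log('hi');
```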
Testing Routes -------------- There are two main methods for testing your routes: ### `assert_routing` ``` def test_movie_route_properly_splits opts = {controller: "plugin", action: "checkout", id: "2"} assert_routing "plugin/checkout/2", opts end ``` `assert_routing` lets you test whether or not the route properly resolves into options. ### `assert_recognizes` ``` def test_route_has_options opts = {controller: "plugin", action: "show", id: "12"} assert_recognizes opts, "/plugins/show/12" end ``` Note the subtle difference between the two: `assert_routing` tests that a URL fits options while `assert_recognizes` tests that a URL breaks into parameters properly. In tests you can simply pass the URL or named route to `get` or `post`. ``` def send_to_jail get '/jail' assert_response :success end def goes_to_login get login_url #... end ``` View a list of all your routes ------------------------------ ``` rails routes ``` Target a specific controller with `-c`, or grep routes using `-g`. Useful in conjunction with `--expanded` which displays routes vertically.
rails class ActionDispatch::RemoteIp class ActionDispatch::RemoteIp =============================== Parent: [Object](../object) This middleware calculates the IP address of the remote client that is making the request. It does this by checking various headers that could contain the address, and then picking the last-set address that is not on the list of trusted IPs. This follows the precedent set by e.g. [the Tomcat server](https://issues.apache.org/bugzilla/show_bug.cgi?id=50453), with [reasoning explained at length](https://blog.gingerlime.com/2012/rails-ip-spoofing-vulnerabilities-and-protection) by @gingerlime. A more detailed explanation of the algorithm is given at [`GetIp#calculate_ip`](remoteip/getip#method-i-calculate_ip). Some Rack servers concatenate repeated headers, like [HTTP RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2) requires. Some Rack servers simply drop preceding headers, and only report the value that was [given in the last header](https://andre.arko.net/2011/12/26/repeated-headers-and-ruby-web-servers). If you are behind multiple proxy servers (like NGINX to HAProxy to Unicorn) then you should test your Rack server to make sure your data is good. IF YOU DON'T USE A PROXY, THIS MAKES YOU VULNERABLE TO IP SPOOFING. This middleware assumes that there is at least one proxy sitting around and setting headers with the client's remote IP address. If you don't use a proxy, because you are hosted on e.g. Heroku without [`SSL`](ssl), any client can claim to have any IP address by setting the X-Forwarded-For header. If you care about that, then you need to explicitly drop or ignore those headers sometime before this middleware runs. TRUSTED\_PROXIES The default trusted IPs list simply includes IP addresses that are guaranteed by the IP specification to be private addresses. Those will not be the ultimate client IP in production, and so are discarded. 
See [en.wikipedia.org/wiki/Private\_network](https://en.wikipedia.org/wiki/Private_network) for details. check\_ip[R] proxies[R] new(app, ip\_spoofing\_check = true, custom\_proxies = nil) Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 60 def initialize(app, ip_spoofing_check = true, custom_proxies = nil) @app = app @check_ip = ip_spoofing_check @proxies = if custom_proxies.blank? TRUSTED_PROXIES elsif custom_proxies.respond_to?(:any?) custom_proxies else ActiveSupport::Deprecation.warn(<<~EOM) Setting config.action_dispatch.trusted_proxies to a single value has been deprecated. Please set this to an enumerable instead. For example, instead of: config.action_dispatch.trusted_proxies = IPAddr.new("10.0.0.0/8") Wrap the value in an Array: config.action_dispatch.trusted_proxies = [IPAddr.new("10.0.0.0/8")] Note that unlike passing a single argument, passing an enumerable will *replace* the default set of trusted proxies. EOM Array(custom_proxies) + TRUSTED_PROXIES end end ``` Create a new `RemoteIp` middleware instance. The `ip_spoofing_check` option is on by default. When on, an exception is raised if it looks like the client is trying to lie about its own IP address. It makes sense to turn off this check on sites aimed at non-IP clients (like WAP devices), or behind proxies that set headers in an incorrect or confusing way (like AWS ELB). The `custom_proxies` argument can take an enumerable which will be used instead of `TRUSTED_PROXIES`. Any proxy setup will put the value you want in the middle (or at the beginning) of the X-Forwarded-For list, with your proxy servers after it. If your proxies aren't removed, pass them in via the `custom_proxies` parameter. That way, the middleware will ignore those IP addresses, and return the one that you want. 
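The selection described above — walk the X-Forwarded-For chain from the last-set address and keep the first one that is not a trusted proxy — can be sketched in plain Ruby. This is a simplification of `GetIp#calculate_ip` (which also cross-checks other headers and handles malformed addresses); the `client_ip` helper and the trusted list here are illustrative only:

```ruby
require "ipaddr"

# Illustrative trusted list: private ranges that cannot be the real client.
TRUSTED_SKETCH = [IPAddr.new("127.0.0.1"), IPAddr.new("10.0.0.0/8"),
                  IPAddr.new("192.168.0.0/16")].freeze

# Walk the forwarded chain right-to-left (last-set first) and return the
# first address that is not a trusted proxy.
def client_ip(forwarded_for, trusted = TRUSTED_SKETCH)
  forwarded_for.split(",").map(&:strip).reverse.find do |ip|
    trusted.none? { |range| range.include?(ip) }
  end
end

puts client_ip("203.0.113.7, 10.0.0.5, 192.168.1.1")  # => 203.0.113.7
```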
call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 90 def call(env) req = ActionDispatch::Request.new env req.remote_ip = GetIp.new(req, check_ip, proxies) @app.call(req.env) end ``` Since the IP address may not be needed, we store the object here without calculating the IP to keep from slowing down the majority of requests. For those requests that do need to know the IP, the [`GetIp#calculate_ip`](remoteip/getip#method-i-calculate_ip) method will calculate the memoized client IP address. rails class ActionDispatch::MiddlewareStack class ActionDispatch::MiddlewareStack ====================================== Parent: [Object](../object) Included modules: [Enumerable](../enumerable) middlewares[RW] new(\*args) { |self| ... } Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 70 def initialize(*args) @middlewares = [] yield(self) if block_given? end ``` [](i) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 87 def [](i) middlewares[i] end ``` build(app = nil, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 160 def build(app = nil, &block) instrumenting = ActiveSupport::Notifications.notifier.listening?(InstrumentationProxy::EVENT_NAME) middlewares.freeze.reverse.inject(app || block) do |a, e| if instrumenting e.build_instrumented(a) else e.build(a) end end end ``` delete(target) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 125 def delete(target) middlewares.reject! { |m| m.name == target.name } end ``` Deletes a middleware from the middleware stack. Returns the array of middlewares not including the deleted item, or returns nil if the target is not found. 
delete!(target) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 133 def delete!(target) delete(target) || (raise "No such middleware to remove: #{target.inspect}") end ``` Deletes a middleware from the middleware stack. Returns the array of middlewares not including the deleted item, or raises `RuntimeError` if the target is not found. each(&block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 75 def each(&block) @middlewares.each(&block) end ``` initialize\_copy(other) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 96 def initialize_copy(other) self.middlewares = other.middlewares.dup end ``` insert(index, klass, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 100 def insert(index, klass, *args, &block) index = assert_index(index, :before) middlewares.insert(index, build_middleware(klass, args, block)) end ``` Also aliased as: [insert\_before](middlewarestack#method-i-insert_before) insert\_after(index, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 108 def insert_after(index, *args, &block) index = assert_index(index, :after) insert(index + 1, *args, &block) end ``` insert\_before(index, klass, \*args, &block) Alias for: [insert](middlewarestack#method-i-insert) last() Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 83 def last middlewares.last end ``` move(target, source) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 137 def move(target, source) source_index = assert_index(source, :before) source_middleware = middlewares.delete_at(source_index) target_index = assert_index(target, :before) middlewares.insert(target_index, source_middleware) end ``` Also aliased as: [move\_before](middlewarestack#method-i-move_before) move\_after(target, source) Show source ``` # File 
actionpack/lib/action_dispatch/middleware/stack.rb, line 147 def move_after(target, source) source_index = assert_index(source, :after) source_middleware = middlewares.delete_at(source_index) target_index = assert_index(target, :after) middlewares.insert(target_index + 1, source_middleware) end ``` move\_before(target, source) Alias for: [move](middlewarestack#method-i-move) size() Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 79 def size middlewares.size end ``` swap(target, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 114 def swap(target, *args, &block) index = assert_index(target, :before) insert(index, *args, &block) middlewares.delete_at(index + 1) end ``` unshift(klass, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 91 def unshift(klass, *args, &block) middlewares.unshift(build_middleware(klass, args, block)) end ``` use(klass, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 155 def use(klass, *args, &block) middlewares.push(build_middleware(klass, args, block)) end ``` rails class ActionDispatch::Response class ActionDispatch::Response =============================== Parent: [Object](../object) Included modules: ActionDispatch::Http::FilterRedirect, [ActionDispatch::Http::Cache::Response](http/cache/response) Represents an HTTP response generated by a controller action. Use it to retrieve the current state of the response, or customize the response. It can either represent a real HTTP response (i.e. one that is meant to be sent back to the web browser) or a [`TestResponse`](testresponse) (i.e. one that is generated from integration tests). Response is mostly a Ruby on Rails framework implementation detail, and should never be used directly in controllers. Controllers should use the methods defined in [`ActionController::Base`](../actioncontroller/base) instead. 
For example, if you want to set the HTTP response's content MIME type, then use ActionController::Base#headers instead of [`Response#headers`](response#attribute-i-headers). Nevertheless, integration tests may want to inspect controller responses in more detail, and that's when Response can be useful for application developers. `Integration` test methods such as ActionDispatch::Integration::Session#get and ActionDispatch::Integration::Session#post return objects of type [`TestResponse`](testresponse) (which are of course also of type Response). For example, the following demo integration test prints the body of the controller response to the console: ``` class DemoControllerTest < ActionDispatch::IntegrationTest def test_print_root_path_to_console get('/') puts response.body end end ``` CONTENT\_TYPE ContentTypeHeader LOCATION NO\_CONTENT\_CODES NullContentTypeHeader SET\_COOKIE header[R] Get headers for this response. headers[R] Get headers for this response. request[RW] The request that the response is responding to. status[R] The HTTP status code. stream[R] The underlying body, as a streamable object. create(status = 200, header = {}, body = [], default\_headers: self.default\_headers) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 150 def self.create(status = 200, header = {}, body = [], default_headers: self.default_headers) header = merge_default_headers(header, default_headers) new status, header, body end ``` merge\_default\_headers(original, default) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 155 def self.merge_default_headers(original, default) default.respond_to?(:merge) ? default.merge(original) : original end ``` new(status = 200, header = {}, body = []) { |self| ... 
} Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 162 def initialize(status = 200, header = {}, body = []) super() @header = Header.new(self, header) self.body, self.status = body, status @cv = new_cond @committed = false @sending = false @sent = false prepare_cache_control! yield self if block_given? end ``` Calls superclass method abort() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 371 def abort if stream.respond_to?(:abort) stream.abort elsif stream.respond_to?(:close) # `stream.close` should really be reserved for a close from the # other direction, but we must fall back to it for # compatibility. stream.close end end ``` await\_commit() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 184 def await_commit synchronize do @cv.wait_until { @committed } end end ``` await\_sent() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 190 def await_sent synchronize { @cv.wait_until { @sent } } end ``` body() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 305 def body @stream.body end ``` Returns the content of the response as a string. This contains the contents of any calls to `render`. body=(body) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 314 def body=(body) if body.respond_to?(:to_path) @stream = body else synchronize do @stream = build_buffer self, munge_body_object(body) end end end ``` Allows you to manually set or override the response body. body\_parts() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 358 def body_parts parts = [] @stream.each { |x| parts << x } parts end ``` charset() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 275 def charset header_info = parsed_content_type_header header_info.charset || self.class.default_charset end ``` The charset of the response. 
HTML wants to know the encoding of the content you're giving them, so we need to send that along. charset=(charset) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 264 def charset=(charset) content_type = parsed_content_type_header.mime_type if false == charset set_content_type content_type, nil else set_content_type content_type, charset || self.class.default_charset end end ``` Sets the HTTP character set. In case of `nil` parameter it sets the charset to `default_charset`. ``` response.charset = 'utf-16' # => 'utf-16' response.charset = nil # => 'utf-8' ``` close() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 367 def close stream.close if stream.respond_to?(:close) end ``` code() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 286 def code @status.to_s end ``` Returns a string to ensure compatibility with `Net::HTTPResponse`. commit!() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 194 def commit! synchronize do before_committed @committed = true @cv.broadcast end end ``` committed?() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 218 def committed?; synchronize { @committed }; end ``` content\_type() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 244 def content_type super.presence end ``` Content type of response. Calls superclass method content\_type=(content\_type) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 234 def content_type=(content_type) return unless content_type new_header_info = parse_content_type(content_type.to_s) prev_header_info = parsed_content_type_header charset = new_header_info.charset || prev_header_info.charset charset ||= self.class.default_charset unless prev_header_info.mime_type set_content_type new_header_info.mime_type, charset end ``` Sets the HTTP response's content MIME type. 
For example, in the controller you could write this: ``` response.content_type = "text/plain" ``` If a character set has been defined for this response (see charset=) then the character set information will also be included in the content type information. cookies() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 395 def cookies cookies = {} if header = get_header(SET_COOKIE) header = header.split("\n") if header.respond_to?(:to_str) header.each do |cookie| if pair = cookie.split(";").first key, value = pair.split("=").map { |v| Rack::Utils.unescape(v) } cookies[key] = value end end end cookies end ``` Returns the response cookies, converted to a [`Hash`](../hash) of (name => value) pairs ``` assert_equal 'AuthorOfNewPage', r.cookies['author'] ``` delete\_header(key) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 182 def delete_header(key); headers.delete key; end ``` each(&block) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 74 def each(&block) sending! x = @stream.each(&block) sent! x end ``` get\_header(key) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 180 def get_header(key); headers[key]; end ``` has\_header?(key) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 179 def has_header?(key); headers.key? key; end ``` media\_type() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 249 def media_type parsed_content_type_header.mime_type end ``` Media type of response. 
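The `content_type`, `charset`, and `media_type` readers above all derive from one parse of the Content-Type header value. A plain-Ruby sketch of that split (simplified; the `parse_content_type` helper here is illustrative, not Rails' actual parser):

```ruby
# Split a Content-Type header value into its MIME type and charset parameter,
# falling back to a default charset when none is given.
def parse_content_type(value, default_charset: "utf-8")
  mime, *params = value.to_s.split(";").map(&:strip)
  charset = params.filter_map { |p| p[/\Acharset=(.+)\z/i, 1] }.first
  { mime_type: mime, charset: charset || default_charset }
end

info = parse_content_type("text/html; charset=ISO-8859-1")
puts info[:mime_type]  # => text/html
puts info[:charset]    # => ISO-8859-1
```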
message() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 298 def message Rack::Utils::HTTP_STATUS_CODES[@status] end ``` Returns the corresponding message for the current HTTP status code: ``` response.status = 200 response.message # => "OK" response.status = 404 response.message # => "Not Found" ``` Also aliased as: [status\_message](response#method-i-status_message) prepare!() Alias for: [to\_a](response#method-i-to_a) reset\_body!() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 354 def reset_body! @stream = build_buffer(self, []) end ``` response\_code() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 281 def response_code @status end ``` The response code of the request. send\_file(path) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 349 def send_file(path) commit! @stream = FileBody.new(path) end ``` Send the file stored at `path` as the response body. sending!() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 202 def sending! synchronize do before_sending @sending = true @cv.broadcast end end ``` sending?() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 217 def sending?; synchronize { @sending }; end ``` sending\_file=(v) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 253 def sending_file=(v) if true == v self.charset = false end end ``` sent!() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 210 def sent! 
synchronize do @sent = true @cv.broadcast end end ``` sent?() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 219 def sent?; synchronize { @sent }; end ``` set\_header(key, v) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 181 def set_header(key, v); headers[key] = v; end ``` status=(status) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 222 def status=(status) @status = Rack::Utils.status_code(status) end ``` Sets the HTTP status code. status\_message() Alias for: [message](response#method-i-message) to\_a() Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 386 def to_a commit! rack_response @status, @header.to_hash end ``` Turns the [`Response`](response) into a Rack-compatible array of the status, headers, and body. Allows explicit splatting: ``` status, headers, body = *response ``` Also aliased as: [prepare!](response#method-i-prepare-21) write(string) Show source ``` # File actionpack/lib/action_dispatch/http/response.rb, line 309 def write(string) @stream.write string end ```
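The `cookies` reader shown earlier reduces each Set-Cookie line to its leading name=value pair and unescapes both halves. In isolation, that parsing looks roughly like this (plain Ruby, using stdlib `CGI.unescape` as a stand-in for `Rack::Utils.unescape`):

```ruby
require "cgi"

# Keep only the first name=value pair of each Set-Cookie line and
# URL-decode both the name and the value.
def parse_set_cookie(header)
  header.split("\n").each_with_object({}) do |cookie, acc|
    pair = cookie.split(";").first or next
    key, value = pair.split("=", 2).map { |v| CGI.unescape(v.to_s) }
    acc[key] = value
  end
end

jar = parse_set_cookie("author=AuthorOfNewPage; path=/\nsession=abc%20def")
puts jar["author"]   # => AuthorOfNewPage
puts jar["session"]  # => abc def
```

Note how cookie attributes such as `path=/` are discarded: only the leading pair of each line survives, which is exactly what the `assert_equal 'AuthorOfNewPage', r.cookies['author']` example relies on.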
rails class ActionDispatch::AssertionResponse class ActionDispatch::AssertionResponse ======================================== Parent: [Object](../object) This is a class that abstracts away an asserted response. It purposely does not inherit from [`Response`](response) because it doesn't need it. That means it does not have headers or a body. code[R] name[R] new(code\_or\_name) Show source ``` # File actionpack/lib/action_dispatch/testing/assertion_response.rb, line 20 def initialize(code_or_name) if code_or_name.is_a?(Symbol) @name = code_or_name @code = code_from_name(code_or_name) else @name = name_from_code(code_or_name) @code = code_or_name end raise ArgumentError, "Invalid response name: #{name}" if @code.nil? raise ArgumentError, "Invalid response code: #{code}" if @name.nil? end ``` Accepts a specific response status code as an [`Integer`](../integer) (404) or [`String`](../string) ('404') or a response status range as a `Symbol` pseudo-code (:success, indicating any 200-299 status code). code\_and\_name() Show source ``` # File actionpack/lib/action_dispatch/testing/assertion_response.rb, line 33 def code_and_name "#{code}: #{name}" end ``` rails class ActionDispatch::Cookies class ActionDispatch::Cookies ============================== Parent: [Object](../object) Read and write data to cookies through ActionController#cookies. When reading cookie data, the data is read from the HTTP request header, Cookie. When writing cookie data, the data is sent out in the HTTP response header, Set-Cookie. Examples of writing: ``` # Sets a simple session cookie. # This cookie will be deleted when the user's browser is closed. cookies[:user_name] = "david" # Cookie values are String-based. Other data types need to be serialized. cookies[:lat_lon] = JSON.generate([47.68, -122.37]) # Sets a cookie that expires in 1 hour. cookies[:login] = { value: "XJ-122", expires: 1.hour } # Sets a cookie that expires at a specific time. 
cookies[:login] = { value: "XJ-122", expires: Time.utc(2020, 10, 15, 5) } # Sets a signed cookie, which prevents users from tampering with its value. # It can be read using the signed method `cookies.signed[:name]` cookies.signed[:user_id] = current_user.id # Sets an encrypted cookie value before sending it to the client which # prevents users from reading and tampering with its value. # It can be read using the encrypted method `cookies.encrypted[:name]` cookies.encrypted[:discount] = 45 # Sets a "permanent" cookie (which expires in 20 years from now). cookies.permanent[:login] = "XJ-122" # You can also chain these methods: cookies.signed.permanent[:login] = "XJ-122" ``` Examples of reading: ``` cookies[:user_name] # => "david" cookies.size # => 2 JSON.parse(cookies[:lat_lon]) # => [47.68, -122.37] cookies.signed[:login] # => "XJ-122" cookies.encrypted[:discount] # => 45 ``` Example for deleting: ``` cookies.delete :user_name ``` Please note that if you specify a :domain when setting a cookie, you must also specify the domain when deleting the cookie: ``` cookies[:name] = { value: 'a yummy cookie', expires: 1.year, domain: 'domain.com' } cookies.delete(:name, domain: 'domain.com') ``` The option symbols for setting cookies are: * `:value` - The cookie's value. * `:path` - The path for which this cookie applies. Defaults to the root of the application. * `:domain` - The domain for which this cookie applies so you can restrict to the domain level. If you use a scheme like www.example.com and want to share the session with user.example.com, set `:domain` to `:all`. To support multiple domains, provide an array, and the first domain matching `request.host` will be used. Make sure to specify the `:domain` option with `:all` or `Array` again when deleting cookies. ``` domain: nil # Does not set cookie domain. (default) domain: :all # Allow the cookie for the top most level # domain and subdomains. 
domain: %w(.example.com .example.org) # Allow the cookie # for concrete domain names. ``` * `:tld_length` - When using `:domain => :all`, this option can be used to explicitly set the TLD length when using a short (<= 3 character) domain that is being interpreted as part of a TLD. For example, to share cookies between user1.lvh.me and user2.lvh.me, set `:tld_length` to 2. * `:expires` - The time at which this cookie expires, as a Time or [`ActiveSupport::Duration`](../activesupport/duration) object. * `:secure` - Whether this cookie is only transmitted to HTTPS servers. Default is `false`. * `:httponly` - Whether this cookie is accessible via scripting or only HTTP. Defaults to `false`. AUTHENTICATED\_ENCRYPTED\_COOKIE\_SALT COOKIES\_DIGEST COOKIES\_ROTATIONS COOKIES\_SAME\_SITE\_PROTECTION COOKIES\_SERIALIZER CookieOverflow Raised when storing more than 4K of session data. ENCRYPTED\_COOKIE\_CIPHER ENCRYPTED\_COOKIE\_SALT ENCRYPTED\_SIGNED\_COOKIE\_SALT GENERATOR\_KEY HTTP\_HEADER MAX\_COOKIE\_SIZE [`Cookies`](cookies) can typically store 4096 bytes. SECRET\_KEY\_BASE SIGNED\_COOKIE\_DIGEST SIGNED\_COOKIE\_SALT USE\_AUTHENTICATED\_COOKIE\_ENCRYPTION USE\_COOKIES\_WITH\_METADATA new(app) Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 686 def initialize(app) @app = app end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 690 def call(env) request = ActionDispatch::Request.new env status, headers, body = @app.call(env) if request.have_cookie_jar? cookie_jar = request.cookie_jar unless cookie_jar.committed? 
cookie_jar.write(headers) if headers[HTTP_HEADER].respond_to?(:join) headers[HTTP_HEADER] = headers[HTTP_HEADER].join("\n") end end end [status, headers, body] end ``` rails class ActionDispatch::SSL class ActionDispatch::SSL ========================== Parent: [Object](../object) This middleware is added to the stack when `config.force_ssl = true`, and is passed the options set in `config.ssl_options`. It does three jobs to enforce secure HTTP requests: 1. **TLS redirect**: Permanently redirects `http://` requests to `https://` with the same URL host, path, etc. Enabled by default. Set `config.ssl_options` to modify the destination URL (e.g. `redirect: { host: "secure.widgets.com", port: 8080 }`), or set `redirect: false` to disable this feature. Requests can opt-out of redirection with `exclude`: ``` config.ssl_options = { redirect: { exclude: -> request { /healthcheck/.match?(request.path) } } } ``` [`Cookies`](cookies) will not be flagged as secure for excluded requests. 2. **Secure cookies**: Sets the `secure` flag on cookies to tell browsers they must not be sent along with `http://` requests. Enabled by default. Set `config.ssl_options` with `secure_cookies: false` to disable this feature. 3. **HTTP Strict Transport Security (HSTS)**: Tells the browser to remember this site as TLS-only and automatically redirect non-TLS requests. Enabled by default. Configure `config.ssl_options` with `hsts: false` to disable. Set `config.ssl_options` with `hsts: { ... }` to configure HSTS: * `expires`: How long, in seconds, these settings will stick. The minimum required to qualify for browser preload lists is 1 year. Defaults to 2 years (recommended). * `subdomains`: Set to `true` to tell the browser to apply these settings to all subdomains. This protects your cookies from interception by a vulnerable site on a subdomain. Defaults to `true`. * `preload`: Advertise that this site may be included in browsers' preloaded HSTS lists. 
HSTS protects your site on every visit *except the first visit* since it hasn't seen your HSTS header yet. To close this gap, browser vendors include a baked-in list of HSTS-enabled sites. Go to [hstspreload.org](https://hstspreload.org) to submit your site for inclusion. Defaults to `false`. To turn off HSTS, omitting the header is not enough. Browsers will remember the original HSTS directive until it expires. Instead, use the header to tell browsers to expire HSTS immediately. Setting `hsts: false` is a shortcut for `hsts: { expires: 0 }`. rails class ActionDispatch::Flash class ActionDispatch::Flash ============================ Parent: [Object](../object) The flash provides a way to pass temporary primitive types ([`String`](../string), [`Array`](../array), [`Hash`](../hash)) between actions. Anything you place in the flash will be exposed to the very next action and then cleared out. This is a great way of doing notices and alerts, such as a create action that sets `flash[:notice] = "Post successfully created"` before redirecting to a display action that can then expose the flash to its template. Actually, that exposure is automatically done. ``` class PostsController < ActionController::Base def create # save post flash[:notice] = "Post successfully created" redirect_to @post end def show # doesn't need to assign the flash notice to the template, that's done automatically end end show.html.erb <% if flash[:notice] %> <div class="notice"><%= flash[:notice] %></div> <% end %> ``` Since the `notice` and `alert` keys are a common idiom, convenience accessors are available: ``` flash.alert = "You must be logged in" flash.notice = "Post successfully created" ``` This example places a string in the flash. And of course, you can put as many as you like at a time too. If you want to pass non-primitive types, you will have to handle that in your application. Example: To show messages with links, you will have to use the sanitize helper. 
Just remember: They'll be gone by the time the next action has been performed. See docs on the [`FlashHash`](flash/flashhash) class for more details about the flash. KEY new(app) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 292 def self.new(app) app; end ``` rails class ActionDispatch::PublicExceptions class ActionDispatch::PublicExceptions ======================================= Parent: [Object](../object) When called, this middleware renders an error page. By default, if an HTML response is expected, it will render static error pages from the `/public` directory. For example, when this middleware receives a 500 response it will render the template found in `/public/500.html`. If an internationalized locale is set, this middleware will attempt to render the template in `/public/500.<locale>.html`. If an internationalized template is not found it will fall back on `/public/500.html`. When a request with a content type other than HTML is made, this middleware will attempt to convert error information into the appropriate response type. 
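The locale-aware fallback described above can be sketched as a small lookup (the `error_template` helper is hypothetical, not the middleware's actual code):

```ruby
# Prefer /public/<status>.<locale>.html, fall back to /public/<status>.html,
# and return nil when neither file exists.
def error_template(public_path, status, locale = nil)
  candidates = []
  candidates << File.join(public_path, "#{status}.#{locale}.html") if locale
  candidates << File.join(public_path, "#{status}.html")
  candidates.find { |path| File.exist?(path) }
end
```

With `/public/500.html` present but no `/public/500.de.html`, a German locale would fall back to the non-localized page.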
public\_path[RW] new(public\_path) Show source ``` # File actionpack/lib/action_dispatch/middleware/public_exceptions.rb, line 17 def initialize(public_path) @public_path = public_path end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/public_exceptions.rb, line 21 def call(env) request = ActionDispatch::Request.new(env) status = request.path_info[1..-1].to_i begin content_type = request.formats.first rescue ActionDispatch::Http::MimeNegotiation::InvalidType content_type = Mime[:text] end body = { status: status, error: Rack::Utils::HTTP_STATUS_CODES.fetch(status, Rack::Utils::HTTP_STATUS_CODES[500]) } render(status, content_type, body) end ``` rails class ActionDispatch::TestRequest class ActionDispatch::TestRequest ================================== Parent: [ActionDispatch::Request](request) DEFAULT\_ENV create(env = {}) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 15 def self.create(env = {}) env = Rails.application.env_config.merge(env) if defined?(Rails.application) && Rails.application env["rack.request.cookie_hash"] ||= {}.with_indifferent_access new(default_env.merge(env)) end ``` Create a new test request with default `env` values. 
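The setter methods listed next (`accept=`, `host=`, `user_agent=`, and so on) all reduce to writing CGI-style keys into the Rack env, e.g. `host=` sets `HTTP_HOST`. The name conversion they rely on can be sketched with a hypothetical helper (note that real Rack treats `Content-Type` and `Content-Length` as exceptions without the prefix):

```ruby
# Hypothetical helper (not part of ActionDispatch): an HTTP header
# name maps to its Rack env key by upcasing, swapping dashes for
# underscores, and prefixing HTTP_.
def rack_header_key(name)
  "HTTP_" + name.to_s.upcase.tr("-", "_")
end

rack_header_key("User-Agent")        # => "HTTP_USER_AGENT"
rack_header_key("If-Modified-Since") # => "HTTP_IF_MODIFIED_SINCE"
```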
accept=(mime\_types) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 66 def accept=(mime_types) delete_header("action_dispatch.request.accepts") set_header("HTTP_ACCEPT", Array(mime_types).collect(&:to_s).join(",")) end ``` action=(action\_name) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 46 def action=(action_name) path_parameters[:action] = action_name.to_s end ``` host=(host) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 30 def host=(host) set_header("HTTP_HOST", host) end ``` if\_modified\_since=(last\_modified) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 50 def if_modified_since=(last_modified) set_header("HTTP_IF_MODIFIED_SINCE", last_modified) end ``` if\_none\_match=(etag) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 54 def if_none_match=(etag) set_header("HTTP_IF_NONE_MATCH", etag) end ``` path=(path) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 42 def path=(path) set_header("PATH_INFO", path) end ``` port=(number) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 34 def port=(number) set_header("SERVER_PORT", number.to_i) end ``` remote\_addr=(addr) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 58 def remote_addr=(addr) set_header("REMOTE_ADDR", addr) end ``` request\_method=(method) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 26 def request_method=(method) super(method.to_s.upcase) end ``` Calls superclass method request\_uri=(uri) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 38 def request_uri=(uri) set_header("REQUEST_URI", uri) end ``` user\_agent=(user\_agent) Show source ``` # File actionpack/lib/action_dispatch/testing/test_request.rb, line 62 def user_agent=(user_agent) 
set_header("HTTP_USER_AGENT", user_agent) end ``` rails class ActionDispatch::HostAuthorization class ActionDispatch::HostAuthorization ======================================== Parent: [Object](../object) This middleware guards from DNS rebinding attacks by explicitly permitting the hosts a request can be sent to, and is passed the options set in `config.host_authorization`. Requests can opt-out of Host Authorization with `exclude`: ``` config.host_authorization = { exclude: ->(request) { request.path =~ /healthcheck/ } } ``` When a request comes to an unauthorized host, the `response_app` application will be executed and rendered. If no `response_app` is given, a default one will run. The default response app logs blocked host info with level 'error' and responds with `403 Forbidden`. The body of the response contains debug info if `config.consider_all_requests_local` is set to true, otherwise the body is empty. ALLOWED\_HOSTS\_IN\_DEVELOPMENT new(app, hosts, exclude: nil, response\_app: nil) Show source ``` # File actionpack/lib/action_dispatch/middleware/host_authorization.rb, line 122 def initialize(app, hosts, exclude: nil, response_app: nil) @app = app @permissions = Permissions.new(hosts) @exclude = exclude @response_app = response_app || DefaultResponseApp.new end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/host_authorization.rb, line 130 def call(env) return @app.call(env) if @permissions.empty? 
request = Request.new(env) if authorized?(request) || excluded?(request) mark_as_authorized(request) @app.call(env) else @response_app.call(env) end end ``` rails class ActionDispatch::IntegrationTest class ActionDispatch::IntegrationTest ====================================== Parent: [ActiveSupport::TestCase](../activesupport/testcase) Included modules: [ActionDispatch::TestProcess::FixtureFile](testprocess/fixturefile), ActionDispatch::IntegrationTest::Behavior An integration test spans multiple controllers and actions, tying them all together to ensure they work together as expected. It tests more completely than either unit or functional tests do, exercising the entire stack, from the dispatcher to the database. At its simplest, you simply extend `IntegrationTest` and write your tests using the get/post methods: ``` require "test_helper" class ExampleTest < ActionDispatch::IntegrationTest fixtures :people def test_login # get the login page get "/login" assert_equal 200, status # post the login and follow through to the home page post "/login", params: { username: people(:jamis).username, password: people(:jamis).password } follow_redirect! assert_equal 200, status assert_equal "/home", path end end ``` However, you can also have multiple session instances open per test, and even extend those instances with assertions and methods to create a very powerful testing DSL that is specific for your application. You can even reference any named routes you happen to have defined. ``` require "test_helper" class AdvancedTest < ActionDispatch::IntegrationTest fixtures :people, :rooms def test_login_and_speak jamis, david = login(:jamis), login(:david) room = rooms(:office) jamis.enter(room) jamis.speak(room, "anybody home?") david.enter(room) david.speak(room, "hello!") end private module CustomAssertions def enter(room) # reference a named route, for maximum internal consistency! get(room_url(id: room.id)) assert(...) ... 
end def speak(room, message) post "/say/#{room.id}", xhr: true, params: { message: message } assert(...) ... end end def login(who) open_session do |sess| sess.extend(CustomAssertions) who = people(who) sess.post "/login", params: { username: who.username, password: who.password } assert(...) end end end ``` Here is a longer example, a simple integration test that exercises multiple controllers: ``` require "test_helper" class UserFlowsTest < ActionDispatch::IntegrationTest test "login and browse site" do # login via https https! get "/login" assert_response :success post "/login", params: { username: users(:david).username, password: users(:david).password } follow_redirect! assert_equal '/welcome', path assert_equal 'Welcome david!', flash[:notice] https!(false) get "/articles/all" assert_response :success assert_select 'h1', 'Articles' end end ``` As you can see, the integration test involves multiple controllers and exercises the entire stack from database to dispatcher. In addition you can have multiple session instances open simultaneously in a test and extend those instances with assertion methods to create a very powerful testing DSL (domain-specific language) just for your application.
Here's an example of multiple sessions and custom DSL in an integration test ``` require "test_helper" class UserFlowsTest < ActionDispatch::IntegrationTest test "login and browse site" do # User david logs in david = login(:david) # User guest logs in guest = login(:guest) # Both are now available in different sessions assert_equal 'Welcome david!', david.flash[:notice] assert_equal 'Welcome guest!', guest.flash[:notice] # User david can browse site david.browses_site # User guest can browse site as well guest.browses_site # Continue with other assertions end private module CustomDsl def browses_site get "/products/all" assert_response :success assert_select 'h1', 'Products' end end def login(user) open_session do |sess| sess.extend(CustomDsl) u = users(user) sess.https! sess.post "/login", params: { username: u.username, password: u.password } assert_equal '/welcome', sess.path sess.https!(false) end end end ``` See the [request helpers documentation](integration/requesthelpers) for help on how to use `get`, etc. ### Changing the request encoding You can also test your JSON API easily by setting what the request should be encoded as: ``` require "test_helper" class ApiTest < ActionDispatch::IntegrationTest test "creates articles" do assert_difference -> { Article.count } do post articles_path, params: { article: { title: "Ahoy!" } }, as: :json end assert_response :success assert_equal({ id: Article.last.id, title: "Ahoy!" }, response.parsed_body) end end ``` The `as` option passes an “application/json” Accept header (thereby setting the request format to JSON unless overridden), sets the content type to “application/json” and encodes the parameters as JSON. Calling `parsed_body` on the response parses the response body based on the last response MIME type. Out of the box, only `:json` is supported. 
But for any custom MIME types you've registered, you can add your own encoders with: ``` ActionDispatch::IntegrationTest.register_encoder :wibble, param_encoder: -> params { params.to_wibble }, response_parser: -> body { body } ``` Where `param_encoder` defines how the params should be encoded and `response_parser` defines how the response body should be parsed through `parsed_body`. Consult the Rails Testing Guide for more.
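The `:json` pairing built into `register_encoder` can be thought of as one encoder lambda plus one parser lambda. A rough sketch of what that pair amounts to (the real `RequestEncoder` also negotiates the `Content-Type` and `Accept` headers described above; the method names here are illustrative):

```ruby
require "json"

# Rough sketch of the built-in :json encoder/parser pair. These are
# hypothetical helpers, not ActionDispatch API: params are serialized
# with JSON on the way in, and parsed_body parses JSON on the way out.
def encode_json_params(params)
  JSON.generate(params)
end

def parse_json_body(body)
  JSON.parse(body)
end

wire = encode_json_params({ "title" => "Ahoy!" })
# => "{\"title\":\"Ahoy!\"}"
parse_json_body(wire)
# => {"title"=>"Ahoy!"}
```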
rails class ActionDispatch::Static class ActionDispatch::Static ============================= Parent: [Object](../object) This middleware serves static files from disk, if available. If no file is found, it hands off to the main app. In Rails apps, this middleware is configured to serve assets from the `public/` directory. Only GET and HEAD requests are served. POST and other HTTP methods are handed off to the main app. Only files in the root directory are served; path traversal is denied. new(app, path, index: "index", headers: {}) Show source ``` # File actionpack/lib/action_dispatch/middleware/static.rb, line 17 def initialize(app, path, index: "index", headers: {}) @app = app @file_handler = FileHandler.new(path, index: index, headers: headers) end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/static.rb, line 22 def call(env) @file_handler.attempt(env) || @app.call(env) end ``` rails class ActionDispatch::FileHandler class ActionDispatch::FileHandler ================================== Parent: [Object](../object) This endpoint serves static files from disk using Rack::File. URL paths are matched with static files according to expected conventions: `path`, `path`.html, `path`/index.html. Precompressed versions of these files are checked first. Brotli (.br) and gzip (.gz) files are supported. If `path`.br exists, this endpoint returns that file with a `Content-Encoding: br` header. If no matching file is found, this endpoint responds 404 Not Found. Pass the `root` directory to search for matching files, an optional `index: "index"` to change the default `path`/index.html, and optional additional response headers. 
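The matching conventions above (`path`, `path`.html, `path`/index.html, with precompressed variants checked first) can be sketched as an ordered candidate list. `candidate_paths` is a hypothetical helper for illustration; the real `FileHandler` resolves and validates files on disk rather than just building names:

```ruby
# Hypothetical helper mirroring the lookup conventions described
# above: each base form is tried with its precompressed variants
# (.br, .gz) before the uncompressed file.
def candidate_paths(path, index: "index")
  bases = [path, "#{path}.html", "#{path}/#{index}.html"]
  bases.flat_map { |base| ["#{base}.br", "#{base}.gz", base] }
end

candidate_paths("/guide").first(3)
# => ["/guide.br", "/guide.gz", "/guide"]
```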
PRECOMPRESSED Accept-Encoding value -> file extension new(root, index: "index", headers: {}, precompressed: %i[ br gzip ], compressible\_content\_types: /\A(?:text\/|application\/javascript)/) Show source ``` # File actionpack/lib/action_dispatch/middleware/static.rb, line 49 def initialize(root, index: "index", headers: {}, precompressed: %i[ br gzip ], compressible_content_types: /\A(?:text\/|application\/javascript)/) @root = root.chomp("/").b @index = index @precompressed = Array(precompressed).map(&:to_s) | %w[ identity ] @compressible_content_types = compressible_content_types @file_server = ::Rack::File.new(@root, headers) end ``` attempt(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/static.rb, line 63 def attempt(env) request = Rack::Request.new env if request.get? || request.head? if found = find_file(request.path_info, accept_encoding: request.accept_encoding) serve request, *found end end end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/static.rb, line 59 def call(env) attempt(env) || @file_server.call(env) end ``` rails class ActionDispatch::DebugLocks class ActionDispatch::DebugLocks ================================= Parent: [Object](../object) This middleware can be used to diagnose deadlocks in the autoload interlock. To use it, insert it near the top of the middleware stack, using `config/application.rb`: ``` config.middleware.insert_before Rack::Sendfile, ActionDispatch::DebugLocks ``` After restarting the application and re-triggering the deadlock condition, the route `/rails/locks` will show a summary of all threads currently known to the interlock, which lock level they are holding or awaiting, and their current backtrace. Generally a deadlock will be caused by the interlock conflicting with some other external lock or blocking I/O call. These cannot be automatically identified, but should be visible in the displayed backtraces. 
NOTE: The formatting and content of this middleware's output is intended for human consumption, and should be expected to change between releases. This middleware exposes operational details of the server, with no access control. It should only be enabled when in use, and removed thereafter. new(app, path = "/rails/locks") Show source ``` # File actionpack/lib/action_dispatch/middleware/debug_locks.rb, line 26 def initialize(app, path = "/rails/locks") @app = app @path = path end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/debug_locks.rb, line 31 def call(env) req = ActionDispatch::Request.new env if req.get? path = req.path_info.chomp("/") if path == @path return render_details(req) end end @app.call(env) end ``` rails class ActionDispatch::TestResponse class ActionDispatch::TestResponse =================================== Parent: [ActionDispatch::Response](response) `Integration` test methods such as ActionDispatch::Integration::Session#get and ActionDispatch::Integration::Session#post return objects of class [`TestResponse`](testresponse), which represent the HTTP response results of the requested controller actions. See [`Response`](response) for more information on controller response objects. 
from\_response(response) Show source ``` # File actionpack/lib/action_dispatch/testing/test_response.rb, line 13 def self.from_response(response) new response.status, response.headers, response.body end ``` parsed\_body() Show source ``` # File actionpack/lib/action_dispatch/testing/test_response.rb, line 17 def parsed_body @parsed_body ||= response_parser.call(body) end ``` response\_parser() Show source ``` # File actionpack/lib/action_dispatch/testing/test_response.rb, line 21 def response_parser @response_parser ||= RequestEncoder.parser(media_type) end ``` rails class ActionDispatch::SystemTestCase class ActionDispatch::SystemTestCase ===================================== Parent: [ActiveSupport::TestCase](../activesupport/testcase) Included modules: [ActionDispatch::SystemTesting::TestHelpers::ScreenshotHelper](systemtesting/testhelpers/screenshothelper) System Testing ============== System tests let you test applications in the browser. Because system tests use a real browser experience, you can test all of your JavaScript easily from your test suite. To create a system test in your application, extend your test class from `ApplicationSystemTestCase`. System tests use Capybara as a base and allow you to configure the settings through your `application_system_test_case.rb` file that is generated with a new application or scaffold. Here is an example system test: ``` require "application_system_test_case" class Users::CreateTest < ApplicationSystemTestCase test "adding a new user" do visit users_path click_on 'New User' fill_in 'Name', with: 'Arya' click_on 'Create User' assert_text 'Arya' end end ``` When generating an application or scaffold, an `application_system_test_case.rb` file will also be generated containing the base class for system testing. This is where you can change the driver, add Capybara settings, and other configuration for your system tests. 
``` require "test_helper" class ApplicationSystemTestCase < ActionDispatch::SystemTestCase driven_by :selenium, using: :chrome, screen_size: [1400, 1400] end ``` By default, `ActionDispatch::SystemTestCase` is driven by the Selenium driver, with the Chrome browser, and a browser size of 1400x1400. Changing the driver configuration options is easy. Let's say you want to use the Firefox browser instead of Chrome. In your `application_system_test_case.rb` file add the following: ``` require "test_helper" class ApplicationSystemTestCase < ActionDispatch::SystemTestCase driven_by :selenium, using: :firefox end ``` `driven_by` has a required argument for the driver name. The keyword arguments are `:using` for the browser and `:screen_size` to change the size of the browser screen. These two options are not applicable for headless drivers and will be silently ignored if passed. Headless browsers such as headless Chrome and headless Firefox are also supported. You can use these browsers by setting the `:using` argument to `:headless_chrome` or `:headless_firefox`. To use a headless driver, like Cuprite, update your Gemfile to use Cuprite instead of Selenium and then declare the driver name in the `application_system_test_case.rb` file. In this case, you would leave out the `:using` option because the driver is headless, but you can still use `:screen_size` to change the size of the browser screen, also you can use `:options` to pass options supported by the driver. Please refer to your driver documentation to learn about supported options. ``` require "test_helper" require "capybara/cuprite" class ApplicationSystemTestCase < ActionDispatch::SystemTestCase driven_by :cuprite, screen_size: [1400, 1400], options: { js_errors: true } end ``` Some drivers require browser capabilities to be passed as a block instead of through the `options` hash. 
As an example, if you want to add mobile emulation on Chrome, you'll have to create an instance of Selenium's `Chrome::Options` object and add capabilities with a block. The block will be passed an instance of `<Driver>::Options` where you can define the capabilities you want. Please refer to your driver documentation to learn about supported options. ``` class ApplicationSystemTestCase < ActionDispatch::SystemTestCase driven_by :selenium, using: :chrome, screen_size: [1024, 768] do |driver_option| driver_option.add_emulation(device_name: 'iPhone 6') driver_option.add_extension('path/to/chrome_extension.crx') end end ``` Because `ActionDispatch::SystemTestCase` is a shim between Capybara and Rails, any driver that is supported by Capybara is supported by system tests as long as you include the required gems and files. DEFAULT\_HOST driven\_by(driver, using: :chrome, screen\_size: [1400, 1400], options: {}, &capabilities) Show source ``` # File actionpack/lib/action_dispatch/system_test_case.rb, line 156 def self.driven_by(driver, using: :chrome, screen_size: [1400, 1400], options: {}, &capabilities) driver_options = { using: using, screen_size: screen_size, options: options } self.driver = SystemTesting::Driver.new(driver, **driver_options, &capabilities) end ``` System Test configuration options The default settings are Selenium, using Chrome, with a screen size of 1400x1400.
Examples: ``` driven_by :cuprite driven_by :selenium, screen_size: [800, 800] driven_by :selenium, using: :chrome driven_by :selenium, using: :headless_chrome driven_by :selenium, using: :firefox driven_by :selenium, using: :headless_firefox ``` rails class ActionDispatch::Request class ActionDispatch::Request ============================== Parent: [Object](../object) Included modules: [ActionDispatch::Http::Cache::Request](http/cache/request), [ActionDispatch::Http::MimeNegotiation](http/mimenegotiation), [ActionDispatch::Http::Parameters](http/parameters), [ActionDispatch::Http::FilterParameters](http/filterparameters), [ActionDispatch::Http::URL](http/url), ActionDispatch::ContentSecurityPolicy::Request, ActionDispatch::PermissionsPolicy::Request ENV\_METHODS HTTP\_METHODS HTTP\_METHOD\_LOOKUP LOCALHOST RFC2518 RFC2616 List of HTTP request methods from the following RFCs: Hypertext Transfer Protocol – HTTP/1.1 ([www.ietf.org/rfc/rfc2616.txt](https://www.ietf.org/rfc/rfc2616.txt)) HTTP Extensions for Distributed Authoring – WEBDAV ([www.ietf.org/rfc/rfc2518.txt](https://www.ietf.org/rfc/rfc2518.txt)) Versioning Extensions to WebDAV ([www.ietf.org/rfc/rfc3253.txt](https://www.ietf.org/rfc/rfc3253.txt)) Ordered Collections Protocol (WebDAV) ([www.ietf.org/rfc/rfc3648.txt](https://www.ietf.org/rfc/rfc3648.txt)) Web Distributed Authoring and Versioning (WebDAV) Access Control Protocol ([www.ietf.org/rfc/rfc3744.txt](https://www.ietf.org/rfc/rfc3744.txt)) Web Distributed Authoring and Versioning (WebDAV) SEARCH ([www.ietf.org/rfc/rfc5323.txt](https://www.ietf.org/rfc/rfc5323.txt)) Calendar Extensions to WebDAV ([www.ietf.org/rfc/rfc4791.txt](https://www.ietf.org/rfc/rfc4791.txt)) PATCH [`Method`](../method) for HTTP ([www.ietf.org/rfc/rfc5789.txt](https://www.ietf.org/rfc/rfc5789.txt)) RFC3253 RFC3648 RFC3744 RFC4791 RFC5323 RFC5789 empty() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 56 def self.empty new({}) end ``` new(env) Show 
source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 60 def initialize(env) super @method = nil @request_method = nil @remote_ip = nil @original_fullpath = nil @fullpath = nil @ip = nil end ``` Calls superclass method [`ActionDispatch::Http::URL::new`](http/url#method-c-new) GET() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 372 def GET fetch_header("action_dispatch.request.query_parameters") do |k| rack_query_params = super || {} controller = path_parameters[:controller] action = path_parameters[:action] rack_query_params = Request::Utils.set_binary_encoding(self, rack_query_params, controller, action) # Check for non UTF-8 parameter values, which would cause errors later Request::Utils.check_param_encoding(rack_query_params) set_header k, Request::Utils.normalize_encode_params(rack_query_params) end rescue Rack::Utils::ParameterTypeError, Rack::Utils::InvalidParameterError => e raise ActionController::BadRequest.new("Invalid query parameters: #{e.message}") end ``` Override Rack's [`GET`](request#method-i-GET) method to support indifferent access. Calls superclass method Also aliased as: [query\_parameters](request#method-i-query_parameters) POST() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 388 def POST fetch_header("action_dispatch.request.request_parameters") do pr = parse_formatted_parameters(params_parsers) do |params| super || {} end pr = Request::Utils.set_binary_encoding(self, pr, path_parameters[:controller], path_parameters[:action]) Request::Utils.check_param_encoding(pr) self.request_parameters = Request::Utils.normalize_encode_params(pr) end rescue Rack::Utils::ParameterTypeError, Rack::Utils::InvalidParameterError => e raise ActionController::BadRequest.new("Invalid request parameters: #{e.message}") end ``` Override Rack's [`POST`](request#method-i-POST) method to support indifferent access. 
Calls superclass method Also aliased as: [request\_parameters](request#method-i-request_parameters) authorization() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 404 def authorization get_header("HTTP_AUTHORIZATION") || get_header("X-HTTP_AUTHORIZATION") || get_header("X_HTTP_AUTHORIZATION") || get_header("REDIRECT_X_HTTP_AUTHORIZATION") end ``` Returns the authorization header regardless of whether it was specified directly or through one of the proxy alternatives. body() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 334 def body if raw_post = get_header("RAW_POST_DATA") raw_post = (+raw_post).force_encoding(Encoding::BINARY) StringIO.new(raw_post) else body_stream end end ``` The request body is an `IO` input stream. If the RAW\_POST\_DATA environment variable is already set, wrap it in a StringIO. commit\_flash() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 425 def commit_flash end ``` content\_length() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 270 def content_length super.to_i end ``` Returns the content length of the request as an integer. 
Calls superclass method controller\_class() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 79 def controller_class params = path_parameters params[:action] ||= "index" controller_class_for(params[:controller]) end ``` controller\_class\_for(name) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 85 def controller_class_for(name) if name controller_param = name.underscore const_name = controller_param.camelize << "Controller" begin const_name.constantize rescue NameError => error if error.missing_name == const_name || const_name.start_with?("#{error.missing_name}::") raise MissingController.new(error.message, error.name) else raise end end else PASS_NOT_FOUND end end ``` form\_data?() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 351 def form_data? FORM_DATA_MEDIA_TYPES.include?(media_type) end ``` Determine whether the request body contains form-data by checking the request Content-Type for one of the media-types: “application/x-www-form-urlencoded” or “multipart/form-data”. The list of form-data media types can be modified through the `FORM_DATA_MEDIA_TYPES` array. A request body is not assumed to contain form-data when no Content-Type header is provided and the [`request_method`](request#method-i-request_method) is [`POST`](request#method-i-POST). fullpath() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 249 def fullpath @fullpath ||= super end ``` Returns the `String` full path including params of the last URL requested. 
``` # get "/articles" request.fullpath # => "/articles" # get "/articles?page=2" request.fullpath # => "/articles?page=2" ``` Calls superclass method headers() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 210 def headers @headers ||= Http::Headers.new(self) end ``` Provides access to the request's HTTP headers, for example: ``` request.headers["Content-Type"] # => "text/plain" ``` http\_auth\_salt() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 179 def http_auth_salt get_header "action_dispatch.http_auth_salt" end ``` ip() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 283 def ip @ip ||= super end ``` Returns the IP address of client as a `String`. Calls superclass method key?(key) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 106 def key?(key) has_header? key end ``` Returns true if the request has a header matching the given key parameter. ``` request.key? :ip_spoofing_check # => true ``` local?() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 412 def local? LOCALHOST.match?(remote_addr) && LOCALHOST.match?(remote_ip) end ``` True if the request came from localhost, 127.0.0.1, or ::1. logger() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 421 def logger get_header("action_dispatch.logger") end ``` media\_type() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 265 def media_type content_mime_type&.to_s end ``` The `String` MIME type of the request. ``` # get "/articles" request.media_type # => "application/x-www-form-urlencoded" ``` method() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 198 def method @method ||= check_method(get_header("rack.methodoverride.original_method") || get_header("REQUEST_METHOD")) end ``` Returns the original value of the environment's REQUEST\_METHOD, even if it was overridden by middleware. 
See [`request_method`](request#method-i-request_method) for more information. method\_symbol() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 203 def method_symbol HTTP_METHOD_LOOKUP[method] end ``` Returns a symbol form of the [`method`](request#method-i-method). original\_fullpath() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 238 def original_fullpath @original_fullpath ||= (get_header("ORIGINAL_FULLPATH") || fullpath) end ``` Returns a `String` with the last requested path including their params. ``` # get '/foo' request.original_fullpath # => '/foo' # get '/foo?bar' request.original_fullpath # => '/foo?bar' ``` original\_url() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 257 def original_url base_url + original_fullpath end ``` Returns the original request URL as a `String`. ``` # get "/articles?page=2" request.original_url # => "http://www.example.com/articles?page=2" ``` query\_parameters() Alias for: [GET](request#method-i-GET) raw\_post() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 323 def raw_post unless has_header? "RAW_POST_DATA" raw_post_body = body set_header("RAW_POST_DATA", raw_post_body.read(content_length)) raw_post_body.rewind if raw_post_body.respond_to?(:rewind) end get_header "RAW_POST_DATA" end ``` Read the request body. This is useful for web services that need to work with raw requests directly. raw\_request\_method() Alias for: [request\_method](request#method-i-request_method) remote\_ip() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 289 def remote_ip @remote_ip ||= (get_header("action_dispatch.remote_ip") || ip).to_s end ``` Returns the IP address of client as a `String`, usually set by the [`RemoteIp`](remoteip) middleware. 
remote\_ip=(remote\_ip) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 293 def remote_ip=(remote_ip) @remote_ip = nil set_header "action_dispatch.remote_ip", remote_ip end ``` request\_id() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 306 def request_id get_header ACTION_DISPATCH_REQUEST_ID end ``` Returns the unique request id, which is based on either the X-Request-Id header that can be generated by a firewall, load balancer, or web server or by the [`RequestId`](requestid) middleware (which sets the action\_dispatch.request\_id environment variable). This unique ID is useful for tracing a request from end-to-end as part of logging or debugging. This relies on the Rack variable set by the [`ActionDispatch::RequestId`](requestid) middleware. Also aliased as: [uuid](request#method-i-uuid) request\_method() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 145 def request_method @request_method ||= check_method(super) end ``` Returns the HTTP method that the application should see. In the case where the method was overridden by a middleware (for instance, if a HEAD request was converted to a [`GET`](request#method-i-GET), or if a \_method parameter was used to determine the method the application should use), this method returns the overridden value, not the original. Calls superclass method Also aliased as: [raw\_request\_method](request#method-i-raw_request_method) request\_method\_symbol() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 191 def request_method_symbol HTTP_METHOD_LOOKUP[request_method] end ``` Returns a symbol form of the [`request_method`](request#method-i-request_method). request\_parameters() Alias for: [POST](request#method-i-POST) request\_parameters=(params) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 416 def request_parameters=(params) raise if params.nil? 
set_header("action_dispatch.request.request_parameters", params) end ``` reset\_session() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 359 def reset_session session.destroy end ``` send\_early\_hints(links) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 225 def send_early_hints(links) return unless env["rack.early_hints"] env["rack.early_hints"].call(links) end ``` Early Hints is an HTTP/2 status code that indicates hints to help a client start making preparations for processing the final response. If the env contains `rack.early_hints` then the server accepts HTTP2 push for Link headers. The `send_early_hints` method accepts a hash of links as follows: ``` send_early_hints("Link" => "</style.css>; rel=preload; as=style\n</script.js>; rel=preload") ``` If you are using `javascript_include_tag` or `stylesheet_link_tag` the Early Hints headers are included by default if supported. server\_software() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 317 def server_software (get_header("SERVER_SOFTWARE") && /^([a-zA-Z]+)/ =~ get_header("SERVER_SOFTWARE")) ? $1.downcase : nil end ``` Returns the lowercase name of the HTTP server software. session\_options=(options) Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 367 def session_options=(options) Session::Options.set self, options end ``` uuid() Alias for: [request\_id](request#method-i-request_id) xhr?() Alias for: [xml\_http\_request?](request#method-i-xml_http_request-3F) xml\_http\_request?() Show source ``` # File actionpack/lib/action_dispatch/http/request.rb, line 277 def xml_http_request? /XMLHttpRequest/i.match?(get_header("HTTP_X_REQUESTED_WITH")) end ``` Returns true if the “X-Requested-With” header contains “XMLHttpRequest” (case-insensitive), which may need to be manually added depending on the choice of JavaScript libraries and frameworks. Also aliased as: [xhr?](request#method-i-xhr-3F)
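Of the helpers above, `server_software` does a small regexp extraction worth seeing in isolation. A standalone sketch, assuming a plain env hash rather than the real request object:

```ruby
# Sketch: take the leading alphabetic run of SERVER_SOFTWARE, lowercased,
# or nil when the header is absent.
def server_software(env)
  software = env["SERVER_SOFTWARE"]
  (software && /^([a-zA-Z]+)/ =~ software) ? $1.downcase : nil
end
```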
rails module ActionDispatch::SystemTesting::TestHelpers::ScreenshotHelper module ActionDispatch::SystemTesting::TestHelpers::ScreenshotHelper ==================================================================== Screenshot helper for system testing. take\_failed\_screenshot() Show source ``` # File actionpack/lib/action_dispatch/system_testing/test_helpers/screenshot_helper.rb, line 44 def take_failed_screenshot take_screenshot if failed? && supports_screenshot? && Capybara::Session.instance_created? end ``` Takes a screenshot of the current page in the browser if the test failed. `take_failed_screenshot` is called during system test teardown. take\_screenshot() Show source ``` # File actionpack/lib/action_dispatch/system_testing/test_helpers/screenshot_helper.rb, line 33 def take_screenshot increment_unique save_html if save_html? save_image puts display_image end ``` Takes a screenshot of the current page in the browser. `take_screenshot` can be used at any point in your system tests to take a screenshot of the current state. This can be useful for debugging or automating visual testing. You can take multiple screenshots per test to investigate changes at different points during your test. These will be named with a sequential prefix (or 'failed' for failing tests). The screenshot will be displayed in your console, if supported. The default screenshots directory is `tmp/screenshots`, but you can set a different one with `Capybara.save_path`. You can set the `RAILS_SYSTEM_TESTING_SCREENSHOT_HTML` environment variable to save the HTML from the page that is being screenshotted, so you can investigate the elements on the page at the time of the screenshot. You can set the `RAILS_SYSTEM_TESTING_SCREENSHOT` environment variable to control the output. Possible values are: * `simple` (default) Only displays the screenshot path. 
* `inline` Display the screenshot in the terminal using the iTerm image protocol ([iterm2.com/documentation-images.html](https://iterm2.com/documentation-images.html)). * `artifact` Display the screenshot in the terminal, using the terminal artifact format ([buildkite.github.io/terminal-to-html/inline-images](https://buildkite.github.io/terminal-to-html/inline-images)/). rails module ActionDispatch::Integration::RequestHelpers module ActionDispatch::Integration::RequestHelpers =================================================== delete(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 39 def delete(path, **args) process(:delete, path, **args) end ``` Performs a DELETE request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. follow\_redirect!(\*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 61 def follow_redirect!(**args) raise "not a redirect! #{status} #{status_message}" unless redirect? method = if [307, 308].include?(response.status) request.method.downcase else :get end public_send(method, response.location, **args) status end ``` Follow a single redirect response. If the last response was not a redirect, an exception will be raised. Otherwise, the redirect is performed on the location header. If the redirection is a 307 or 308 redirect, the same HTTP verb will be used when redirecting, otherwise a GET request will be performed. Any arguments are passed to the underlying request. get(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 15 def get(path, **args) process(:get, path, **args) end ``` Performs a GET request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. 
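The verb-preservation rule in `follow_redirect!` is small enough to state on its own. A sketch of the decision (the helper name here is illustrative, not part of the API): 307 and 308 redirects re-send the original HTTP verb, while every other redirect is followed with a GET.

```ruby
# 307 and 308 preserve the original verb; any other redirect becomes a GET.
def verb_for_redirect(status, original_verb)
  [307, 308].include?(status) ? original_verb : :get
end
```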
head(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 45 def head(path, **args) process(:head, path, **args) end ``` Performs a HEAD request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. options(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 51 def options(path, **args) process(:options, path, **args) end ``` Performs an OPTIONS request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. patch(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 27 def patch(path, **args) process(:patch, path, **args) end ``` Performs a PATCH request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. post(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 21 def post(path, **args) process(:post, path, **args) end ``` Performs a POST request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. put(path, \*\*args) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 33 def put(path, **args) process(:put, path, **args) end ``` Performs a PUT request with the given parameters. See [`ActionDispatch::Integration::Session#process`](session#method-i-process) for more details. 
rails module ActionDispatch::Integration::Runner module ActionDispatch::Integration::Runner =========================================== Included modules: ActionDispatch::Assertions APP\_SESSIONS app[R] new(\*args, &blk) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 324 def initialize(*args, &blk) super(*args, &blk) @integration_session = nil end ``` Calls superclass method create\_session(app) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 344 def create_session(app) klass = APP_SESSIONS[app] ||= Class.new(Integration::Session) { # If the app is a Rails app, make url_helpers available on the session. # This makes app.url_for and app.foo_path available in the console. if app.respond_to?(:routes) && app.routes.is_a?(ActionDispatch::Routing::RouteSet) include app.routes.url_helpers include app.routes.mounted_helpers end } klass.new(app) end ``` default\_url\_options() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 411 def default_url_options integration_session.default_url_options end ``` default\_url\_options=(options) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 415 def default_url_options=(options) integration_session.default_url_options = options end ``` integration\_session() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 334 def integration_session @integration_session ||= create_session(app) end ``` open\_session() { |session| ... } Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 387 def open_session dup.tap do |session| session.reset! session.root_session = self.root_session || self yield session if block_given? end end ``` Open a new session instance. If a block is given, the new session is yielded to the block before being returned. 
``` session = open_session do |sess| sess.extend(CustomAssertions) end ``` By default, a single session is automatically created for you, but you can use this method to open multiple sessions that ought to be tested simultaneously. reset!() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 340 def reset! @integration_session = create_session(app) end ``` Reset the current session. This is useful for testing multiple sessions in a single test case. rails class ActionDispatch::Integration::Session class ActionDispatch::Integration::Session =========================================== Parent: [Object](../../object) Included modules: [ActionDispatch::Routing::UrlFor](../routing/urlfor) An instance of this class represents a set of requests and responses performed sequentially by a test process. Because you can instantiate multiple sessions and run them side-by-side, you can also mimic (to some limited extent) multiple simultaneous users interacting with your system. Typically, you will instantiate a new session using IntegrationTest#open\_session, rather than instantiating [`Integration::Session`](session) directly. DEFAULT\_HOST accept[RW] The Accept header to send. controller[R] A reference to the controller instance used by the last request. host[W] host![W] remote\_addr[RW] The [`remote_addr`](session#attribute-i-remote_addr) used in the last request. request[R] A reference to the request instance used by the last request. request\_count[RW] A running counter of the number of requests processed. response[R] A reference to the response instance used by the last request. new(app) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 126 def initialize(app) super() @app = app reset! end ``` Create and initialize a new [`Session`](session) instance. 
Calls superclass method [`ActionDispatch::Routing::UrlFor::new`](../routing/urlfor#method-c-new) cookies() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 107 def cookies _mock_session.cookie_jar end ``` A map of the cookies returned by the last response, and which will be sent with the next request. host() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 94 def host @host || DEFAULT_HOST end ``` The hostname used in the last request. https!(flag = true) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 174 def https!(flag = true) @https = flag end ``` Specify whether or not the session should mimic a secure HTTPS request. ``` session.https! session.https!(false) ``` https?() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 183 def https? @https end ``` Returns `true` if the session is mimicking a secure HTTPS request. ``` if session.https? ... end ``` process(method, path, params: nil, headers: nil, env: nil, xhr: false, as: nil) Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 220 def process(method, path, params: nil, headers: nil, env: nil, xhr: false, as: nil) request_encoder = RequestEncoder.encoder(as) headers ||= {} if method == :get && as == :json && params headers["X-Http-Method-Override"] = "GET" method = :post end if %r{://}.match?(path) path = build_expanded_path(path) do |location| https! URI::HTTPS === location if location.scheme if url_host = location.host default = Rack::Request::DEFAULT_PORTS[location.scheme] url_host += ":#{location.port}" if default != location.port host! url_host end end end hostname, port = host.split(":") request_env = { :method => method, :params => request_encoder.encode_params(params), "SERVER_NAME" => hostname, "SERVER_PORT" => port || (https? ? "443" : "80"), "HTTPS" => https? ? "on" : "off", "rack.url_scheme" => https? ? 
"https" : "http", "REQUEST_URI" => path, "HTTP_HOST" => host, "REMOTE_ADDR" => remote_addr, "CONTENT_TYPE" => request_encoder.content_type, "HTTP_ACCEPT" => request_encoder.accept_header || accept } wrapped_headers = Http::Headers.from_hash({}) wrapped_headers.merge!(headers) if headers if xhr wrapped_headers["HTTP_X_REQUESTED_WITH"] = "XMLHttpRequest" wrapped_headers["HTTP_ACCEPT"] ||= [Mime[:js], Mime[:html], Mime[:xml], "text/xml", "*/*"].join(", ") end # This modifies the passed request_env directly. if wrapped_headers.present? Http::Headers.from_hash(request_env).merge!(wrapped_headers) end if env.present? Http::Headers.from_hash(request_env).merge!(env) end session = Rack::Test::Session.new(_mock_session) # NOTE: rack-test v0.5 doesn't build a default uri correctly # Make sure requested path is always a full URI. session.request(build_full_uri(path, request_env), request_env) @request_count += 1 @request = ActionDispatch::Request.new(session.last_request.env) response = _mock_session.last_response @response = ActionDispatch::TestResponse.from_response(response) @response.request = @request @html_document = nil @url_options = nil @controller = @request.controller_instance response.status end ``` Performs the actual request. * `method`: The HTTP method (GET, POST, PATCH, PUT, DELETE, HEAD, OPTIONS) as a symbol. * `path`: The URI (as a [`String`](../../string)) on which you want to perform the request. * `params`: The HTTP parameters that you want to pass. This may be `nil`, a [`Hash`](../../hash), or a [`String`](../../string) that is appropriately encoded (`application/x-www-form-urlencoded` or `multipart/form-data`). * `headers`: Additional headers to pass, as a [`Hash`](../../hash). The headers will be merged into the Rack env hash. * `env`: Additional env to pass, as a [`Hash`](../../hash). The headers will be merged into the Rack env hash. * `xhr`: Set to `true` if you want to make an Ajax request. Adds request headers characteristic of XMLHttpRequest e.g. 
HTTP\_X\_REQUESTED\_WITH. The headers will be merged into the Rack env hash. * `as`: Used for encoding the request with different content type. Supports `:json` by default and will set the appropriate request headers. The headers will be merged into the Rack env hash. This method is rarely used directly. Use `#get`, `#post`, or other standard HTTP methods in integration tests. `#process` is only required when using a request method that doesn't have a method defined in the integration tests. This method returns the response status, after performing the request. Furthermore, if this method was called from an [`ActionDispatch::IntegrationTest`](../integrationtest) object, then that object's `@response` instance variable will point to a [`Response`](../response) object which one can use to inspect the details of the response. Example: ``` process :get, '/author', params: { since: 201501011400 } ``` reset!() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 150 def reset! @https = false @controller = @request = @response = nil @_mock_session = nil @request_count = 0 @url_options = nil self.host = DEFAULT_HOST self.remote_addr = "127.0.0.1" self.accept = "text/xml,application/xml,application/xhtml+xml," \ "text/html;q=0.9,text/plain;q=0.8,image/png," \ "*/*;q=0.5" unless defined? @named_routes_configured # the helpers are made protected by default--we make them public for # easier access during testing and troubleshooting. @named_routes_configured = true end end ``` Resets the instance. This can be used to reset the state information in an existing session instance, so it can be used from a clean-slate condition. ``` session.reset! 
``` url\_options() Show source ``` # File actionpack/lib/action_dispatch/testing/integration.rb, line 133 def url_options @url_options ||= default_url_options.dup.tap do |url_options| url_options.reverse_merge!(controller.url_options) if controller.respond_to?(:url_options) if @app.respond_to?(:routes) url_options.reverse_merge!(@app.routes.default_url_options) end url_options.reverse_merge!(host: host, protocol: https? ? "https" : "http") end end ``` rails class ActionDispatch::RemoteIp::GetIp class ActionDispatch::RemoteIp::GetIp ====================================== Parent: [Object](../../object) The [`GetIp`](getip) class exists as a way to defer processing of the request data into an actual IP address. If the [`ActionDispatch::Request#remote_ip`](../request#method-i-remote_ip) method is called, this class will calculate the value and then memoize it. new(req, check\_ip, proxies) Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 100 def initialize(req, check_ip, proxies) @req = req @check_ip = check_ip @proxies = proxies end ``` calculate\_ip() Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 124 def calculate_ip # Set by the Rack web server, this is a single value. remote_addr = ips_from(@req.remote_addr).last # Could be a CSV list and/or repeated headers that were concatenated. client_ips = ips_from(@req.client_ip).reverse forwarded_ips = ips_from(@req.x_forwarded_for).reverse # +Client-Ip+ and +X-Forwarded-For+ should not, generally, both be set. # If they are both set, it means that either: # # 1) This request passed through two proxies with incompatible IP header # conventions. # 2) The client passed one of +Client-Ip+ or +X-Forwarded-For+ # (whichever the proxy servers weren't using) themselves. # # Either way, there is no way for us to determine which header is the # right one after the fact. Since we have no idea, if we are concerned # about IP spoofing we need to give up and explode. 
(If you're not # concerned about IP spoofing you can turn the +ip_spoofing_check+ # option off.) should_check_ip = @check_ip && client_ips.last && forwarded_ips.last if should_check_ip && !forwarded_ips.include?(client_ips.last) # We don't know which came from the proxy, and which from the user raise IpSpoofAttackError, "IP spoofing attack?! " \ "HTTP_CLIENT_IP=#{@req.client_ip.inspect} " \ "HTTP_X_FORWARDED_FOR=#{@req.x_forwarded_for.inspect}" end # We assume these things about the IP headers: # # - X-Forwarded-For will be a list of IPs, one per proxy, or blank # - Client-Ip is propagated from the outermost proxy, or is blank # - REMOTE_ADDR will be the IP that made the request to Rack ips = [forwarded_ips, client_ips].flatten.compact # If every single IP option is in the trusted list, return the IP # that's furthest away filter_proxies(ips + [remote_addr]).first || ips.last || remote_addr end ``` Sort through the various IP address headers, looking for the IP most likely to be the address of the actual remote client making this request. REMOTE\_ADDR will be correct if the request is made directly against the Ruby process, on e.g. Heroku. When the request is proxied by another server like HAProxy or NGINX, the IP address that made the original request will be put in an X-Forwarded-For header. If there are multiple proxies, that header may contain a list of IPs. Other proxy services set the Client-Ip header instead, so we check that too. As discussed in [this post about Rails IP Spoofing](https://blog.gingerlime.com/2012/rails-ip-spoofing-vulnerabilities-and-protection/), while the first IP in the list is likely to be the “originating” IP, it could also have been set by the client maliciously. In order to find the first address that is (probably) accurate, we take the list of IPs, remove known and trusted proxies, and then take the last address left, which was presumably set by one of those proxies. 
to\_s() Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 167 def to_s @ip ||= calculate_ip end ``` Memoizes the value returned by [`calculate_ip`](getip#method-i-calculate_ip) and returns it for [`ActionDispatch::Request`](../request) to use. filter\_proxies(ips) Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 186 def filter_proxies(ips) # :doc: ips.reject do |ip| @proxies.any? { |proxy| proxy === ip } end end ``` ips\_from(header) Show source ``` # File actionpack/lib/action_dispatch/middleware/remote_ip.rb, line 172 def ips_from(header) # :doc: return [] unless header # Split the comma-separated list into an array of strings. ips = header.strip.split(/[,\s]+/) ips.select do |ip| # Only return IPs that are valid according to the IPAddr#new method. range = IPAddr.new(ip).to_range # We want to make sure nobody is sneaking a netmask in. range.begin == range.end rescue ArgumentError nil end end ```
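The header parsing above leans on two small steps: keep only syntactically valid single-host addresses (rejecting anything carrying a netmask), then strip trusted proxies. A self-contained sketch using the stdlib `ipaddr` library; the proxy list and header values below are made up for illustration:

```ruby
require "ipaddr"

# Keep only valid, single-host IPs from a comma/space separated header.
def ips_from(header)
  return [] unless header
  header.strip.split(/[,\s]+/).select do |ip|
    range = IPAddr.new(ip).to_range
    # A single host is a one-element range; netmasks like 10.0.0.0/8 are not.
    range.begin == range.end
  rescue ArgumentError
    false
  end
end

# Drop every IP that matches a trusted proxy (IPAddr#=== accepts strings).
def filter_proxies(ips, proxies)
  ips.reject { |ip| proxies.any? { |proxy| proxy === ip } }
end
```

With a forwarded chain of `"10.0.0.1, 203.0.113.7"` and `10.0.0.0/8` trusted, only `203.0.113.7` survives, matching the "remove known proxies, keep what's left" strategy described above.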
rails module ActionDispatch::Assertions::RoutingAssertions module ActionDispatch::Assertions::RoutingAssertions ===================================================== Suite of assertions to test routes generated by Rails and the handling of requests made to them. assert\_generates(expected\_path, options, defaults = {}, extras = {}, message = nil) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/routing.rb, line 85 def assert_generates(expected_path, options, defaults = {}, extras = {}, message = nil) if %r{://}.match?(expected_path) fail_on(URI::InvalidURIError, message) do uri = URI.parse(expected_path) expected_path = uri.path.to_s.empty? ? "/" : uri.path end else expected_path = "/#{expected_path}" unless expected_path.start_with?("/") end options = options.clone generated_path, query_string_keys = @routes.generate_extras(options, defaults) found_extras = options.reject { |k, _| ! query_string_keys.include? k } msg = message || sprintf("found extras <%s>, not <%s>", found_extras, extras) assert_equal(extras, found_extras, msg) msg = message || sprintf("The generated path <%s> did not match <%s>", generated_path, expected_path) assert_equal(expected_path, generated_path, msg) end ``` Asserts that the provided options can be used to generate the provided path. This is the inverse of `assert_recognizes`. The `extras` parameter is used to tell the request the names and values of additional request parameters that would be in a query string. The `message` parameter allows you to specify a custom error message for assertion failures. The `defaults` parameter is unused. 
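Before comparing, `assert_generates` normalizes `expected_path`: a full URL is reduced to its path component (defaulting to `/`), and a bare path gains a leading slash. That normalization step, sketched in isolation with stdlib `uri`:

```ruby
require "uri"

# Sketch of assert_generates' expected_path normalization.
def normalize_expected_path(expected_path)
  if expected_path.match?(%r{://})
    path = URI.parse(expected_path).path.to_s
    path.empty? ? "/" : path
  else
    expected_path.start_with?("/") ? expected_path : "/#{expected_path}"
  end
end
```

This is why `assert_generates "items/list", ...` and `assert_generates "/items/list", ...` behave the same.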
``` # Asserts that the default action is generated for a route with no action assert_generates "/items", controller: "items", action: "index" # Tests that the list action is properly routed assert_generates "/items/list", controller: "items", action: "list" # Tests the generation of a route with a parameter assert_generates "/items/list/1", { controller: "items", action: "list", id: "1" } # Asserts that the generated route gives us our custom route assert_generates "changesets/12", { controller: 'scm', action: 'show_diff', revision: "12" } ``` assert\_recognizes(expected\_options, path, extras = {}, msg = nil) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/routing.rb, line 47 def assert_recognizes(expected_options, path, extras = {}, msg = nil) if path.is_a?(Hash) && path[:method].to_s == "all" [:get, :post, :put, :delete].each do |method| assert_recognizes(expected_options, path.merge(method: method), extras, msg) end else request = recognized_request_for(path, extras, msg) expected_options = expected_options.clone expected_options.stringify_keys! msg = message(msg, "") { sprintf("The recognized options <%s> did not match <%s>, difference:", request.path_parameters, expected_options) } assert_equal(expected_options, request.path_parameters, msg) end end ``` Asserts that the routing of the given `path` was handled correctly and that the parsed options (given in the `expected_options` hash) match `path`. Basically, it asserts that Rails recognizes the route given by `expected_options`. Pass a hash in the second argument (`path`) to specify the request method. This is useful for routes requiring a specific HTTP method. The hash should contain a :path with the incoming request path and a :method containing the required HTTP verb. 
``` # Asserts that POSTing to /items will call the create action on ItemsController assert_recognizes({controller: 'items', action: 'create'}, {path: 'items', method: :post}) ``` You can also pass in `extras` with a hash containing URL parameters that would normally be in the query string. This can be used to assert that values in the query string will end up in the params hash correctly. To test query strings you must use the extras argument because appending the query string on the path directly will not work. For example: ``` # Asserts that a path of '/items/list/1?view=print' returns the correct options assert_recognizes({controller: 'items', action: 'list', id: '1', view: 'print'}, 'items/list/1', { view: "print" }) ``` The `message` parameter allows you to pass in an error message that is displayed upon failure. ``` # Check the default route (i.e., the index action) assert_recognizes({controller: 'items', action: 'index'}, 'items') # Test a specific action assert_recognizes({controller: 'items', action: 'list'}, 'items/list') # Test an action with a parameter assert_recognizes({controller: 'items', action: 'destroy', id: '1'}, 'items/destroy/1') # Test a custom route assert_recognizes({controller: 'items', action: 'show', id: '1'}, 'view/item1') ``` assert\_routing(path, options, defaults = {}, extras = {}, message = nil) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/routing.rb, line 128 def assert_routing(path, options, defaults = {}, extras = {}, message = nil) assert_recognizes(options, path, extras, message) controller, default_controller = options[:controller], defaults[:controller] if controller && controller.include?(?/) && default_controller && default_controller.include?(?/) options[:controller] = "/#{controller}" end generate_options = options.dup.delete_if { |k, _| defaults.key?(k) } assert_generates(path.is_a?(Hash) ? 
path[:path] : path, generate_options, defaults, extras, message) end ``` Asserts that path and options match both ways; in other words, it verifies that `path` generates `options` and then that `options` generates `path`. This essentially combines `assert_recognizes` and `assert_generates` into one step. The `extras` hash allows you to specify options that would normally be provided as a query string to the action. The `message` parameter allows you to specify a custom error message to display upon failure. ``` # Asserts a basic route: a controller with the default action (index) assert_routing '/home', controller: 'home', action: 'index' # Test a route generated with a specific controller, action, and parameter (id) assert_routing '/entries/show/23', controller: 'entries', action: 'show', id: 23 # Asserts a basic route (controller + default action), with an error message if it fails assert_routing '/store', { controller: 'store', action: 'index' }, {}, {}, 'Route for store index not generated properly' # Tests a route, providing a defaults hash assert_routing 'controller/action/9', {id: "9", item: "square"}, {controller: "controller", action: "action"}, {}, {item: "square"} # Tests a route with an HTTP method assert_routing({ method: 'put', path: '/product/321' }, { controller: "product", action: "update", id: "321" }) ``` method\_missing(selector, \*args, &block) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/routing.rb, line 183 def method_missing(selector, *args, &block) if defined?(@controller) && @controller && defined?(@routes) && @routes && @routes.named_routes.route_defined?(selector) @controller.public_send(selector, *args, &block) else super end end ``` ROUTES TODO: These assertions should really work in an integration context Calls superclass method with\_routing() { |routes| ... 
} Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/routing.rb, line 153 def with_routing old_routes, @routes = @routes, ActionDispatch::Routing::RouteSet.new if defined?(@controller) && @controller old_controller, @controller = @controller, @controller.clone _routes = @routes @controller.singleton_class.include(_routes.url_helpers) if @controller.respond_to? :view_context_class view_context_class = Class.new(@controller.view_context_class) do include _routes.url_helpers end custom_view_context = Module.new { define_method(:view_context_class) do view_context_class end } @controller.extend(custom_view_context) end end yield @routes ensure @routes = old_routes if defined?(@controller) && @controller @controller = old_controller end end ``` A helper to make it easier to test different route configurations. This method temporarily replaces @routes with a new RouteSet instance. The new instance is yielded to the passed block. Typically the block will create some routes using `set.draw { match ... }`: ``` with_routing do |set| set.draw do resources :users end assert_equal "/users", users_path end ``` rails module ActionDispatch::Assertions::ResponseAssertions module ActionDispatch::Assertions::ResponseAssertions ====================================================== A small suite of assertions that test responses from Rails applications. 
assert\_redirected\_to(options = {}, message = nil) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/response.rb, line 53 def assert_redirected_to(options = {}, message = nil) assert_response(:redirect, message) return true if options === @response.location redirect_is = normalize_argument_to_redirection(@response.location) redirect_expected = normalize_argument_to_redirection(options) message ||= "Expected response to be a redirect to <#{redirect_expected}> but was a redirect to <#{redirect_is}>" assert_operator redirect_expected, :===, redirect_is, message end ``` Asserts that the response is a redirect to a URL matching the given options. ``` # Asserts that the redirection was to the "index" action on the WeblogController assert_redirected_to controller: "weblog", action: "index" # Asserts that the redirection was to the named route login_url assert_redirected_to login_url # Asserts that the redirection was to the URL for @customer assert_redirected_to @customer # Asserts that the redirection matches the regular expression assert_redirected_to %r(\Ahttp://example.org) ``` assert\_response(type, message = nil) Show source ``` # File actionpack/lib/action_dispatch/testing/assertions/response.rb, line 30 def assert_response(type, message = nil) message ||= generate_response_message(type) if RESPONSE_PREDICATES.keys.include?(type) assert @response.public_send(RESPONSE_PREDICATES[type]), message else assert_equal AssertionResponse.new(type).code, @response.response_code, message end end ``` Asserts that the response is one of the following types: * `:success` - Status code was in the 200-299 range * `:redirect` - Status code was in the 300-399 range * `:missing` - Status code was 404 * `:error` - Status code was in the 500-599 range You can also pass an explicit status number like `assert_response(501)` or its symbolic equivalent `assert_response(:not_implemented)`. See Rack::Utils::SYMBOL\_TO\_STATUS\_CODE for a full list. 
``` # Asserts that the response was a redirection assert_response :redirect # Asserts that the response code was status code 401 (unauthorized) assert_response 401 ``` rails module ActionDispatch::TestProcess::FixtureFile module ActionDispatch::TestProcess::FixtureFile ================================================ fixture\_file\_upload(path, mime\_type = nil, binary = false) Show source ``` # File actionpack/lib/action_dispatch/testing/test_process.rb, line 19 def fixture_file_upload(path, mime_type = nil, binary = false) if self.class.file_fixture_path && !File.exist?(path) path = file_fixture(path) end Rack::Test::UploadedFile.new(path, mime_type, binary) end ``` Shortcut for `Rack::Test::UploadedFile.new(File.join(ActionDispatch::IntegrationTest.file_fixture_path, path), type)`: ``` post :change_avatar, params: { avatar: fixture_file_upload('david.png', 'image/png') } ``` Default fixture files location is `test/fixtures/files`. To upload binary files on Windows, pass `:binary` as the last parameter. This will not affect other platforms: ``` post :change_avatar, params: { avatar: fixture_file_upload('david.png', 'image/png', :binary) } ``` rails module ActionDispatch::Flash::RequestMethods module ActionDispatch::Flash::RequestMethods ============================================= flash() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 47 def flash flash = flash_hash return flash if flash self.flash = Flash::FlashHash.from_session_value(session["flash"]) end ``` Access the contents of the flash. Use `flash["notice"]` to read a notice you put there or `flash["notice"] = "hello"` to put a new one. 
flash=(flash) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 53 def flash=(flash) set_header Flash::KEY, flash end ``` rails class ActionDispatch::Flash::FlashHash class ActionDispatch::Flash::FlashHash ======================================= Parent: [Object](../../object) Included modules: [Enumerable](../../enumerable) [](k) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 159 def [](k) @flashes[k.to_s] end ``` []=(k, v) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 153 def []=(k, v) k = k.to_s @discard.delete k @flashes[k] = v end ``` alert() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 260 def alert self[:alert] end ``` Convenience accessor for `flash[:alert]`. alert=(message) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 265 def alert=(message) self[:alert] = message end ``` Convenience accessor for `flash[:alert]=`. clear() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 192 def clear @discard.clear @flashes.clear end ``` delete(key) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 177 def delete(key) key = key.to_s @discard.delete key @flashes.delete key self end ``` discard(k = nil) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 245 def discard(k = nil) k = k.to_s if k @discard.merge Array(k || keys) k ? 
self[k] : self end ``` Marks the entire flash or a single flash entry to be discarded by the end of the current action: ``` flash.discard # discard the entire flash at the end of the current action flash.discard(:warning) # discard only the "warning" entry at the end of the current action ``` each(&block) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 197 def each(&block) @flashes.each(&block) end ``` empty?() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 188 def empty? @flashes.empty? end ``` initialize\_copy(other) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 145 def initialize_copy(other) if other.now_is_loaded? @now = other.now.dup @now.flash = self end super end ``` Calls superclass method keep(k = nil) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 235 def keep(k = nil) k = k.to_s if k @discard.subtract Array(k || keys) k ? self[k] : self end ``` Keeps either the entire current flash or a specific flash entry available for the next action: ``` flash.keep # keeps the entire flash flash.keep(:notice) # keeps only the "notice" entry, the rest of the flash is discarded ``` key?(name) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 173 def key?(name) @flashes.key? name.to_s end ``` keys() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 169 def keys @flashes.keys end ``` notice() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 270 def notice self[:notice] end ``` Convenience accessor for `flash[:notice]`. notice=(message) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 275 def notice=(message) self[:notice] = message end ``` Convenience accessor for `flash[:notice]=`. 
now() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 227 def now @now ||= FlashNow.new(self) end ``` Sets a flash that will not be available to the next action, only to the current. ``` flash.now[:message] = "Hello current action" ``` This method enables you to use the flash as a central messaging system in your app. When you need to pass an object to the next action, you use the standard flash assign (`[]=`). When you need to pass an object to the current action, you use `now`, and your object will vanish when the current action is done. Entries set via `now` are accessed the same way as standard entries: `flash['my-key']`. Also, brings two convenience accessors: ``` flash.now.alert = "Beware now!" # Equivalent to flash.now[:alert] = "Beware now!" flash.now.notice = "Good luck now!" # Equivalent to flash.now[:notice] = "Good luck now!" ``` to\_hash() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 184 def to_hash @flashes.dup end ``` now\_is\_loaded?() Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 280 def now_is_loaded? @now end ``` stringify\_array(array) Show source ``` # File actionpack/lib/action_dispatch/middleware/flash.rb, line 285 def stringify_array(array) # :doc: array.map do |item| item.kind_of?(Symbol) ? item.to_s : item end end ``` rails class ActionDispatch::MiddlewareStack::InstrumentationProxy class ActionDispatch::MiddlewareStack::InstrumentationProxy ============================================================ Parent: [Object](../../object) This class is used to instrument the execution of a single middleware. It proxies the `call` method transparently and instruments the method call. 
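The lifecycle difference between `flash[]=` and `flash.now` can be modeled in a few lines. The following is a toy model, not the Rails implementation: a regular entry survives into the next action, while a "now" entry is marked for discard immediately and vanishes once the current action finishes.

```ruby
# Toy model of the flash lifecycle: @discard tracks keys to drop at the
# end of the current action; sweep! plays the role of the per-request
# cleanup Rails performs.
class ToyFlash
  def initialize
    @entries = {}
    @discard = []
  end

  # flash[:key] = value -> available to the *next* action
  def []=(key, value)
    @discard.delete(key)
    @entries[key] = value
  end

  # flash.now[:key] = value -> available to the *current* action only
  def now(key, value)
    @entries[key] = value
    @discard << key
  end

  def [](key)
    @entries[key]
  end

  # End of an action: drop discarded entries, then mark everything that
  # remains so it lasts exactly one more action.
  def sweep!
    @discard.each { |key| @entries.delete(key) }
    @discard = @entries.keys
  end
end

flash = ToyFlash.new
flash[:notice] = "saved!"          # for the next action
flash.now(:alert, "current only")  # for this action
flash.sweep!                       # current action ends

flash[:notice] # => "saved!"
flash[:alert]  # => nil
```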
EVENT\_NAME new(middleware, class\_name) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 51 def initialize(middleware, class_name) @middleware = middleware @payload = { middleware: class_name, } end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/middleware/stack.rb, line 59 def call(env) ActiveSupport::Notifications.instrument(EVENT_NAME, @payload) do @middleware.call(env) end end ``` rails module ActionDispatch::Http::MimeNegotiation module ActionDispatch::Http::MimeNegotiation ============================================= BROWSER\_LIKE\_ACCEPTS We use normal content negotiation unless you include `*/*` in your list, in which case we assume you're a browser and send HTML. RESCUABLE\_MIME\_FORMAT\_ERRORS accepts() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 54 def accepts fetch_header("action_dispatch.request.accepts") do |k| header = get_header("HTTP_ACCEPT").to_s.strip v = if header.empty? [content_mime_type] else Mime::Type.parse(header) end set_header k, v rescue ::Mime::Type::InvalidMimeType => e raise InvalidType, e.message end end ``` Returns the accepted MIME type for the request. content\_mime\_type() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 23 def content_mime_type fetch_header("action_dispatch.request.content_type") do |k| v = if get_header("CONTENT_TYPE") =~ /^([^,;]*)/ Mime::Type.lookup($1.strip.downcase) else nil end set_header k, v rescue ::Mime::Type::InvalidMimeType => e raise InvalidType, e.message end end ``` The MIME type of the HTTP request, such as `Mime[:xml]`. content\_type() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 36 def content_type if self.class.return_only_media_type_on_content_type ActiveSupport::Deprecation.warn( "Rails 7.1 will return Content-Type header without modification." \ " If you want just the MIME type, please use `#media_type` instead." 
) content_mime_type&.to_s else super end end ``` Calls superclass method format(view\_path = []) Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 75 def format(view_path = []) formats.first || Mime::NullType.instance end ``` Returns the MIME type for the format used in the request. ``` GET /posts/5.xml | request.format => Mime[:xml] GET /posts/5.xhtml | request.format => Mime[:html] GET /posts/5 | request.format => Mime[:html] or Mime[:js], or request.accepts.first ``` format=(extension) Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 127 def format=(extension) parameters[:format] = extension.to_s set_header "action_dispatch.request.formats", [Mime::Type.lookup_by_extension(parameters[:format])] end ``` Sets the format by string extension, which can be used to force custom formats that are not controlled by the extension. ``` class ApplicationController < ActionController::Base before_action :adjust_format_for_iphone private def adjust_format_for_iphone request.format = :iphone if request.env["HTTP_USER_AGENT"][/iPhone/] end end ``` formats() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 79 def formats fetch_header("action_dispatch.request.formats") do |k| v = if params_readable? Array(Mime[parameters[:format]]) elsif use_accept_header && valid_accept_header accepts elsif extension_format = format_from_path_extension [extension_format] elsif xhr? [Mime[:js]] else [Mime[:html]] end v = v.select do |format| format.symbol || format.ref == "*/*" end set_header k, v end end ``` formats=(extensions) Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 146 def formats=(extensions) parameters[:format] = extensions.first.to_s set_header "action_dispatch.request.formats", extensions.collect { |extension| Mime::Type.lookup_by_extension(extension) } end ``` Sets the formats by string extensions. 
This differs from [`format=`](mimenegotiation#method-i-format-3D) by allowing you to set multiple, ordered formats, which is useful when you want to have a fallback. In this example, the :iphone format will be used if it's available, otherwise it'll fall back to the :html format. ``` class ApplicationController < ActionController::Base before_action :adjust_format_for_iphone_with_html_fallback private def adjust_format_for_iphone_with_html_fallback request.formats = [ :iphone, :html ] if request.env["HTTP_USER_AGENT"][/iPhone/] end end ``` negotiate\_mime(order) Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 154 def negotiate_mime(order) formats.each do |priority| if priority == Mime::ALL return order.first elsif order.include?(priority) return priority end end order.include?(Mime::ALL) ? format : nil end ``` Returns the first MIME type that matches the provided array of MIME types. should\_apply\_vary\_header?() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 166 def should_apply_vary_header? !params_readable? && use_accept_header && valid_accept_header end ``` variant() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 112 def variant @variant ||= ActiveSupport::ArrayInquirer.new end ``` variant=(variant) Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 102 def variant=(variant) variant = Array(variant) if variant.all?(Symbol) @variant = ActiveSupport::ArrayInquirer.new(variant) else raise ArgumentError, "request.variant must be set to a Symbol or an Array of Symbols." end end ``` Sets the variant for the template. 
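The priority loop in `negotiate_mime` can be sketched in plain Ruby, using symbols in place of `Mime::Type` objects (`:all` stands in for `Mime::ALL`, and the final fallback is the request's first format, as `format` is in the real method):

```ruby
# Simplified sketch of the negotiate_mime loop: walk the request's
# formats in priority order and return the first one the server
# supports; a browser-style */* takes whatever the server lists first.
def negotiate(requested, supported)
  requested.each do |priority|
    return supported.first if priority == :all
    return priority if supported.include?(priority)
  end
  supported.include?(:all) ? requested.first : nil
end

negotiate([:iphone, :html], [:html, :json]) # => :html (fallback matched)
negotiate([:all], [:json, :xml])            # => :json (browser-style */*)
negotiate([:csv], [:json, :xml])            # => nil   (nothing acceptable)
```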
format\_from\_path\_extension() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 190 def format_from_path_extension # :doc: path = get_header("action_dispatch.original_path") || get_header("PATH_INFO") if match = path && path.match(/\.(\w+)\z/) Mime[match.captures.first] end end ``` params\_readable?() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 175 def params_readable? # :doc: parameters[:format] rescue *RESCUABLE_MIME_FORMAT_ERRORS false end ``` use\_accept\_header() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 186 def use_accept_header # :doc: !self.class.ignore_accept_header end ``` valid\_accept\_header() Show source ``` # File actionpack/lib/action_dispatch/http/mime_negotiation.rb, line 181 def valid_accept_header # :doc: (xhr? && (accept.present? || content_mime_type)) || (accept.present? && !accept.match?(BROWSER_LIKE_ACCEPTS)) end ```
rails class ActionDispatch::Http::UploadedFile class ActionDispatch::Http::UploadedFile ========================================= Parent: [Object](../../object) Models uploaded files. The actual file is accessible via the `tempfile` accessor, though some of its interface is available directly for convenience. Uploaded files are temporary files whose lifespan is one request. When the object is finalized Ruby unlinks the file, so there is no need to clean them with a separate maintenance task. content\_type[RW] A string with the MIME type of the file. headers[RW] A string with the headers of the multipart request. original\_filename[RW] The basename of the file in the client. tempfile[RW] A `Tempfile` object with the actual uploaded file. Note that some of its interface is available directly. close(unlink\_now = false) Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 58 def close(unlink_now = false) @tempfile.close(unlink_now) end ``` Shortcut for `tempfile.close`. eof?() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 83 def eof? @tempfile.eof? end ``` Shortcut for `tempfile.eof?`. open() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 53 def open @tempfile.open end ``` Shortcut for `tempfile.open`. path() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 63 def path @tempfile.path end ``` Shortcut for `tempfile.path`. read(length = nil, buffer = nil) Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 48 def read(length = nil, buffer = nil) @tempfile.read(length, buffer) end ``` Shortcut for `tempfile.read`. rewind() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 73 def rewind @tempfile.rewind end ``` Shortcut for `tempfile.rewind`. size() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 78 def size @tempfile.size end ``` Shortcut for `tempfile.size`. 
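The "shortcut" methods above are plain delegation to the wrapped `Tempfile`. A self-contained sketch of that pattern using the stdlib (`SketchUpload` is an illustrative stand-in, not `ActionDispatch::Http::UploadedFile` itself):

```ruby
require "tempfile"
require "forwardable"

# Hold a Tempfile and forward the common IO calls to it, so callers can
# treat the upload like a file without reaching for #tempfile directly.
class SketchUpload
  extend Forwardable

  attr_reader :tempfile, :original_filename, :content_type

  def_delegators :@tempfile, :read, :rewind, :size, :path, :eof?, :close

  def initialize(tempfile, filename, content_type)
    @tempfile = tempfile
    @original_filename = filename
    @content_type = content_type
  end
end

tmp = Tempfile.new("upload")
tmp.write("avatar bytes")
tmp.rewind

upload = SketchUpload.new(tmp, "david.png", "image/png")
upload.size # => 12
upload.read # => "avatar bytes"
```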
to\_io() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 87 def to_io @tempfile.to_io end ``` to\_path() Show source ``` # File actionpack/lib/action_dispatch/http/upload.rb, line 68 def to_path @tempfile.to_path end ``` Shortcut for `tempfile.to_path`. rails module ActionDispatch::Http::URL module ActionDispatch::Http::URL ================================= HOST\_REGEXP IP\_HOST\_REGEXP PROTOCOL\_REGEXP extract\_domain(host, tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 22 def extract_domain(host, tld_length) extract_domain_from(host, tld_length) if named_host?(host) end ``` Returns the domain part of a host given the domain level. ``` # Top-level domain example extract_domain('www.example.com', 1) # => "example.com" # Second-level domain example extract_domain('dev.www.example.co.uk', 2) # => "example.co.uk" ``` extract\_subdomain(host, tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 46 def extract_subdomain(host, tld_length) extract_subdomains(host, tld_length).join(".") end ``` Returns the subdomains of a host as a [`String`](../../string) given the domain level. ``` # Top-level domain example extract_subdomain('www.example.com', 1) # => "www" # Second-level domain example extract_subdomain('dev.www.example.co.uk', 2) # => "dev.www" ``` extract\_subdomains(host, tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 32 def extract_subdomains(host, tld_length) if named_host?(host) extract_subdomains_from(host, tld_length) else [] end end ``` Returns the subdomains of a host as an [`Array`](../../array) given the domain level. 
``` # Top-level domain example extract_subdomains('www.example.com', 1) # => ["www"] # Second-level domain example extract_subdomains('dev.www.example.co.uk', 2) # => ["dev", "www"] ``` full\_url\_for(options) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 58 def full_url_for(options) host = options[:host] protocol = options[:protocol] port = options[:port] unless host raise ArgumentError, "Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true" end build_host_url(host, port, protocol, options, path_for(options)) end ``` new() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 179 def initialize super @protocol = nil @port = nil end ``` Calls superclass method path\_for(options) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 70 def path_for(options) path = options[:script_name].to_s.chomp("/") path << options[:path] if options.key?(:path) path = "/" if options[:trailing_slash] && path.blank? add_params(path, options[:params]) if options.key?(:params) add_anchor(path, options[:anchor]) if options.key?(:anchor) path end ``` url\_for(options) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 50 def url_for(options) if options[:only_path] path_for options else full_url_for options end end ``` domain(tld\_length = @@tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 321 def domain(tld_length = @@tld_length) ActionDispatch::Http::URL.extract_domain(host, tld_length) end ``` Returns the domain part of a host, such as “rubyonrails.org” in “www.rubyonrails.org”. You can specify a different `tld_length`, such as 2 to catch rubyonrails.co.uk in “www.rubyonrails.co.uk”. host() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 226 def host raw_host_with_port.sub(/:\d+$/, "") end ``` Returns the host for this request, such as “example.com”. 
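The `tld_length` parameter in the extractors above boils down to label counting: the last `tld_length + 1` dot-separated labels form the domain, and everything before them is a subdomain. A rough sketch (`split_host` is a made-up helper; the real code also special-cases IP addresses and unnamed hosts):

```ruby
# Split a named host into [subdomains, domain] the way tld_length
# dictates: keep the last (tld_length + 1) labels as the domain.
def split_host(host, tld_length)
  labels = host.split(".")
  domain = labels.last(1 + tld_length).join(".")
  subdomains = labels[0...-(1 + tld_length)]
  [subdomains, domain]
end

split_host("www.example.com", 1)       # => [["www"], "example.com"]
split_host("dev.www.example.co.uk", 2) # => [["dev", "www"], "example.co.uk"]
```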
``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.host # => "example.com" ``` host\_with\_port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 242 def host_with_port "#{host}#{port_string}" end ``` Returns a host:port string for this request, such as “example.com” or “example.com:8080”. Port is only included if it is not a default port (80 or 443) ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com' req.host_with_port # => "example.com" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:80' req.host_with_port # => "example.com" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.host_with_port # => "example.com:8080" ``` optional\_port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 292 def optional_port standard_port? ? nil : port end ``` Returns a number port suffix like 8080 if the port number of this request is not the default HTTP port 80 or HTTPS port 443. ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:80' req.optional_port # => nil req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.optional_port # => 8080 ``` port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 253 def port @port ||= if raw_host_with_port =~ /:(\d+)$/ $1.to_i else standard_port end end ``` Returns the port number of this request as an integer. ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com' req.port # => 80 req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.port # => 8080 ``` port\_string() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 304 def port_string standard_port? ? "" : ":#{port}" end ``` Returns a string port suffix, including colon, like “:8080” if the port number of this request is not the default HTTP port 80 or HTTPS port 443. 
``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:80' req.port_string # => "" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.port_string # => ":8080" ``` protocol() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 200 def protocol @protocol ||= ssl? ? "https://" : "http://" end ``` Returns 'https://' if this is an [`SSL`](../ssl) request and 'http://' otherwise. ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com' req.protocol # => "http://" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com', 'HTTPS' => 'on' req.protocol # => "https://" ``` raw\_host\_with\_port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 214 def raw_host_with_port if forwarded = x_forwarded_host.presence forwarded.split(/,\s?/).last else get_header("HTTP_HOST") || "#{server_name}:#{get_header('SERVER_PORT')}" end end ``` Returns the host and port for this request, such as “example.com:8080”. ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com' req.raw_host_with_port # => "example.com" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:80' req.raw_host_with_port # => "example.com:80" req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.raw_host_with_port # => "example.com:8080" ``` server\_port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 315 def server_port get_header("SERVER_PORT").to_i end ``` Returns the requested port, such as 8080, based on SERVER\_PORT ``` req = ActionDispatch::Request.new 'SERVER_PORT' => '80' req.server_port # => 80 req = ActionDispatch::Request.new 'SERVER_PORT' => '8080' req.server_port # => 8080 ``` standard\_port() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 265 def standard_port if "https://" == protocol 443 else 80 end end ``` Returns the standard port number for this request's protocol. 
``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.standard_port # => 80 ``` standard\_port?() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 280 def standard_port? port == standard_port end ``` Returns whether this request is using the standard port ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:80' req.standard_port? # => true req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com:8080' req.standard_port? # => false ``` subdomain(tld\_length = @@tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 337 def subdomain(tld_length = @@tld_length) ActionDispatch::Http::URL.extract_subdomain(host, tld_length) end ``` Returns all the subdomains as a string, so `"dev.www"` would be returned for “dev.www.rubyonrails.org”. You can specify a different `tld_length`, such as 2 to catch `"www"` instead of `"www.rubyonrails"` in “www.rubyonrails.co.uk”. subdomains(tld\_length = @@tld\_length) Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 329 def subdomains(tld_length = @@tld_length) ActionDispatch::Http::URL.extract_subdomains(host, tld_length) end ``` Returns all the subdomains as an array, so `["dev", "www"]` would be returned for “dev.www.rubyonrails.org”. You can specify a different `tld_length`, such as 2 to catch `["www"]` instead of `["www", "rubyonrails"]` in “www.rubyonrails.co.uk”. url() Show source ``` # File actionpack/lib/action_dispatch/http/url.rb, line 189 def url protocol + host_with_port + fullpath end ``` Returns the complete [`URL`](url) used for this request. ``` req = ActionDispatch::Request.new 'HTTP_HOST' => 'example.com' req.url # => "http://example.com" ``` rails class ActionDispatch::Http::Headers class ActionDispatch::Http::Headers ==================================== Parent: [Object](../../object) Included modules: [Enumerable](../../enumerable) Provides access to the request's HTTP headers from the environment. 
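The host/port helpers above compose in a predictable way: strip an optional `:port` suffix from the raw host, default to the protocol's standard port, and only render a port string when the port is non-standard. A condensed plain-Ruby sketch of that interplay (`host_and_port` is a made-up helper, not a Rails API):

```ruby
# Mirror the host / port / port_string / host_with_port relationships
# for a raw "host[:port]" value like HTTP_HOST.
def host_and_port(raw_host, ssl: false)
  standard_port = ssl ? 443 : 80
  host = raw_host.sub(/:\d+$/, "")
  port = raw_host[/:(\d+)$/, 1]&.to_i || standard_port
  port_string = port == standard_port ? "" : ":#{port}"
  { host: host, port: port, host_with_port: "#{host}#{port_string}" }
end

host_and_port("example.com:8080")
# => { host: "example.com", port: 8080, host_with_port: "example.com:8080" }
host_and_port("example.com:80")
# => { host: "example.com", port: 80, host_with_port: "example.com" }
```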
``` env = { "CONTENT_TYPE" => "text/plain", "HTTP_USER_AGENT" => "curl/7.43.0" } headers = ActionDispatch::Http::Headers.from_hash(env) headers["Content-Type"] # => "text/plain" headers["User-Agent"] # => "curl/7.43.0" ``` Also note that when headers are mapped to CGI-like variables by the Rack server, both dashes and underscores are converted to underscores. This ambiguity cannot be resolved at this stage anymore. Both underscores and dashes have to be interpreted as if they were originally sent as dashes. ``` # GET / HTTP/1.1 # ... # User-Agent: curl/7.43.0 # X_Custom_Header: token headers["X_Custom_Header"] # => nil headers["X-Custom-Header"] # => "token" ``` CGI\_VARIABLES HTTP\_HEADER from\_hash(hash) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 50 def self.from_hash(hash) new ActionDispatch::Request.new hash end ``` [](key) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 59 def [](key) @req.get_header env_name(key) end ``` Returns the value for the given key mapped to @env. []=(key, value) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 64 def []=(key, value) @req.set_header env_name(key), value end ``` Sets the given value for the key mapped to @env. add(key, value) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 69 def add(key, value) @req.add_header env_name(key), value end ``` Add a value to a multivalued header like Vary or Accept-Encoding. each(&block) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 95 def each(&block) @req.each_header(&block) end ``` env() Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 116 def env; @req.env.dup; end ``` fetch(key, default = DEFAULT) { || ... 
} Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 87 def fetch(key, default = DEFAULT) @req.fetch_header(env_name(key)) do return default unless default == DEFAULT return yield if block_given? raise KeyError, key end end ``` Returns the value for the given key mapped to @env. If the key is not found and an optional code block is not provided, raises a `KeyError` exception. If the code block is provided, then it will be run and its result returned. include?(key) Alias for: [key?](headers#method-i-key-3F) key?(key) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 73 def key?(key) @req.has_header? env_name(key) end ``` Also aliased as: [include?](headers#method-i-include-3F) merge(headers\_or\_env) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 101 def merge(headers_or_env) headers = @req.dup.headers headers.merge!(headers_or_env) headers end ``` Returns a new [`Http::Headers`](headers) instance containing the contents of `headers_or_env` and the original instance. merge!(headers\_or\_env) Show source ``` # File actionpack/lib/action_dispatch/http/headers.rb, line 110 def merge!(headers_or_env) headers_or_env.each do |key, value| @req.set_header env_name(key), value end end ``` Adds the contents of `headers_or_env` to original instance entries; duplicate keys are overwritten with the values from `headers_or_env`. rails module ActionDispatch::Http::Parameters module ActionDispatch::Http::Parameters ======================================== DEFAULT\_PARSERS PARAMETERS\_KEY parameter\_parsers[R] Returns the parameter parsers. 
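The header-name-to-env-key mapping that `[]`, `[]=`, and `key?` rely on can be approximated in a few lines. This is a sketch: dashes become underscores, the name is upcased, and everything except the CGI variables (only `CONTENT_TYPE` and `CONTENT_LENGTH` are listed here; the real `CGI_VARIABLES` set contains more entries) gains an `HTTP_` prefix.

```ruby
# Approximate the "Content-Type" -> "CONTENT_TYPE" and
# "User-Agent" -> "HTTP_USER_AGENT" translation described above.
CGI_VARS = %w[CONTENT_TYPE CONTENT_LENGTH].freeze

def env_name(header)
  name = header.upcase.tr("-", "_")
  CGI_VARS.include?(name) ? name : "HTTP_#{name}"
end

env_name("Content-Type")    # => "CONTENT_TYPE"
env_name("User-Agent")      # => "HTTP_USER_AGENT"
env_name("X-Custom-Header") # => "HTTP_X_CUSTOM_HEADER"
```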
parameters() Show source ``` # File actionpack/lib/action_dispatch/http/parameters.rb, line 50 def parameters params = get_header("action_dispatch.request.parameters") return params if params params = begin request_parameters.merge(query_parameters) rescue EOFError query_parameters.dup end params.merge!(path_parameters) set_header("action_dispatch.request.parameters", params) params end ``` Returns both GET and POST parameters in a single hash. Also aliased as: [params](parameters#method-i-params) params() Alias for: [parameters](parameters#method-i-parameters) path\_parameters() Show source ``` # File actionpack/lib/action_dispatch/http/parameters.rb, line 82 def path_parameters get_header(PARAMETERS_KEY) || set_header(PARAMETERS_KEY, {}) end ``` Returns a hash with the parameters used to form the path of the request. Returned hash keys are strings: ``` { action: "my_action", controller: "my_controller" } ``` rails module ActionDispatch::Http::FilterParameters module ActionDispatch::Http::FilterParameters ============================================== Allows you to specify sensitive parameters which will be replaced from the request log by looking in the query string of the request and all sub-hashes of the params hash to filter. Filtering only certain sub-keys from a hash is possible by using the dot notation: 'credit\_card.number'. If a block is given, each key and value of the params hash and all sub-hashes are passed to it, where the value or the key can be replaced using String#replace or similar methods. 
``` env["action_dispatch.parameter_filter"] = [:password] => replaces the value to all keys matching /password/i with "[FILTERED]" env["action_dispatch.parameter_filter"] = [:foo, "bar"] => replaces the value to all keys matching /foo|bar/i with "[FILTERED]" env["action_dispatch.parameter_filter"] = [ /\Apin\z/i, /\Apin_/i ] => replaces the value for the exact (case-insensitive) key 'pin' and all (case-insensitive) keys beginning with 'pin_', with "[FILTERED]" Does not match keys with 'pin' as a substring, such as 'shipping_id'. env["action_dispatch.parameter_filter"] = [ "credit_card.code" ] => replaces { credit_card: {code: "xxxx"} } with "[FILTERED]", does not change { file: { code: "xxxx"} } env["action_dispatch.parameter_filter"] = -> (k, v) do v.reverse! if k.match?(/secret/i) end => reverses the value to all keys matching /secret/i ``` KV\_RE PAIR\_RE new() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 39 def initialize super @filtered_parameters = nil @filtered_env = nil @filtered_path = nil end ``` Calls superclass method filtered\_env() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 54 def filtered_env @filtered_env ||= env_filter.filter(@env) end ``` Returns a hash of request.env with all sensitive data replaced. filtered\_parameters() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 47 def filtered_parameters @filtered_parameters ||= parameter_filter.filter(parameters) rescue ActionDispatch::Http::Parameters::ParseError @filtered_parameters = {} end ``` Returns a hash of parameters with all sensitive data replaced. filtered\_path() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 59 def filtered_path @filtered_path ||= query_string.empty? ? path : "#{path}?#{filtered_query_string}" end ``` Reconstructs a path with all sensitive GET parameters replaced. 
env\_filter() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 70 def env_filter # :doc: user_key = fetch_header("action_dispatch.parameter_filter") { return NULL_ENV_FILTER } parameter_filter_for(Array(user_key) + ENV_MATCH) end ``` filtered\_query\_string() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 83 def filtered_query_string # :doc: query_string.gsub(PAIR_RE) do |_| parameter_filter.filter($1 => $2).first.join("=") end end ``` parameter\_filter() Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 64 def parameter_filter # :doc: parameter_filter_for fetch_header("action_dispatch.parameter_filter") { return NULL_PARAM_FILTER } end ``` parameter\_filter\_for(filters) Show source ``` # File actionpack/lib/action_dispatch/http/filter_parameters.rb, line 77 def parameter_filter_for(filters) # :doc: ActiveSupport::ParameterFilter.new(filters) end ```
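The core of the filtering behaviour is a recursive key-pattern match. A hand-rolled sketch of just that part (in Rails the work is done by `ActiveSupport::ParameterFilter`; dot notation and proc filters from the real API are left out):

```ruby
# Replace the value of any key matching one of the patterns with
# "[FILTERED]", recursing into nested hashes.
def filter_params(params, patterns)
  params.to_h do |key, value|
    if value.is_a?(Hash)
      [key, filter_params(value, patterns)]
    elsif patterns.any? { |pattern| pattern.match?(key.to_s) }
      [key, "[FILTERED]"]
    else
      [key, value]
    end
  end
end

filter_params({ user: "bob", password: "s3cret", card: { number: "4111" } },
              [/password/i, /number/i])
# => { user: "bob", password: "[FILTERED]", card: { number: "[FILTERED]" } }
```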
rails module ActionDispatch::Http::Cache::Response module ActionDispatch::Http::Cache::Response ============================================= DATE DEFAULT\_CACHE\_CONTROL LAST\_MODIFIED MUST\_REVALIDATE NO\_CACHE NO\_STORE PRIVATE PUBLIC SPECIAL\_KEYS cache\_control[R] date() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 68 def date if date_header = get_header(DATE) Time.httpdate(date_header) end end ``` date=(utc\_time) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 78 def date=(utc_time) set_header DATE, utc_time.httpdate end ``` date?() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 74 def date? has_header? DATE end ``` etag=(weak\_validators) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 101 def etag=(weak_validators) self.weak_etag = weak_validators end ``` This method sets a weak ETag validator on the response so browsers and proxies may cache the response, keyed on the ETag. On subsequent requests, the If-None-Match header is set to the cached ETag. If it matches the current ETag, we can return a 304 Not Modified response with no body, letting the browser or proxy know that their cache is current. Big savings in request time and network bandwidth. Weak ETags are considered to be semantically equivalent but not byte-for-byte identical. This is perfect for browser caching of HTML pages where we don't care about exact equality, just what the user is viewing. Strong ETags are considered byte-for-byte identical. They allow a browser or proxy cache to support [`Range`](../../../range) requests, useful for paging through a PDF file or scrubbing through a video. Some CDNs only support strong ETags and will ignore weak ETags entirely. Weak ETags are what we almost always need, so they're the default. Check out [`strong_etag=`](response#method-i-strong_etag-3D) to provide a strong ETag validator. 
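The weak/strong validator distinction and the If-None-Match matching described above can be sketched with two hypothetical helpers (these are not the Rails API; real ETags are digests of the supplied validators):

```ruby
# Hypothetical helpers sketching ETag validator semantics -- not the
# Rails API. A weak validator is prefixed with W/; If-None-Match may
# carry several comma-separated validators, and "*" matches anything.
def weak_validator?(etag)
  etag.start_with?('W/"')
end

def etag_matches?(if_none_match, etag)
  validators = if_none_match ? if_none_match.split(/\s*,\s*/) : []
  validators.include?(etag) || validators.include?("*")
end

weak_validator?('W/"abc"')                    # => true
etag_matches?('W/"abc", W/"def"', 'W/"def"')  # => true
etag_matches?("*", '"anything"')              # => true
```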
etag?() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 113 def etag?; etag; end ``` last\_modified() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 54 def last_modified if last = get_header(LAST_MODIFIED) Time.httpdate(last) end end ``` last\_modified=(utc\_time) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 64 def last_modified=(utc_time) set_header LAST_MODIFIED, utc_time.httpdate end ``` last\_modified?() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 60 def last_modified? has_header? LAST_MODIFIED end ``` strong\_etag=(strong\_validators) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 109 def strong_etag=(strong_validators) set_header "ETag", generate_strong_etag(strong_validators) end ``` strong\_etag?() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 121 def strong_etag? etag? && !weak_etag? end ``` True if an ETag is set and it isn't a weak validator (not preceded with W/) weak\_etag=(weak\_validators) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 105 def weak_etag=(weak_validators) set_header "ETag", generate_weak_etag(weak_validators) end ``` weak\_etag?() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 116 def weak_etag? etag? 
&& etag.start_with?('W/"') end ``` True if an ETag is set and it's a weak validator (preceded with W/) rails module ActionDispatch::Http::Cache::Request module ActionDispatch::Http::Cache::Request ============================================ HTTP\_IF\_MODIFIED\_SINCE HTTP\_IF\_NONE\_MATCH etag\_matches?(etag) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 28 def etag_matches?(etag) if etag validators = if_none_match_etags validators.include?(etag) || validators.include?("*") end end ``` fresh?(response) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 38 def fresh?(response) last_modified = if_modified_since etag = if_none_match return false unless last_modified || etag success = true success &&= not_modified?(response.last_modified) if last_modified success &&= etag_matches?(response.etag) if etag success end ``` Check response freshness (Last-Modified and ETag) against request If-Modified-Since and If-None-Match conditions. If both headers are supplied, both must match, or the request is not considered fresh. if\_modified\_since() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 10 def if_modified_since if since = get_header(HTTP_IF_MODIFIED_SINCE) Time.rfc2822(since) rescue nil end end ``` if\_none\_match() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 16 def if_none_match get_header HTTP_IF_NONE_MATCH end ``` if\_none\_match\_etags() Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 20 def if_none_match_etags if_none_match ? 
if_none_match.split(/\s*,\s*/) : [] end ``` not\_modified?(modified\_at) Show source ``` # File actionpack/lib/action_dispatch/http/cache.rb, line 24 def not_modified?(modified_at) if_modified_since && modified_at && if_modified_since >= modified_at end ``` rails module ActionDispatch::Http::Parameters::ClassMethods module ActionDispatch::Http::Parameters::ClassMethods ====================================================== parameter\_parsers=(parsers) Show source ``` # File actionpack/lib/action_dispatch/http/parameters.rb, line 44 def parameter_parsers=(parsers) @parameter_parsers = parsers.transform_keys { |key| key.respond_to?(:symbol) ? key.symbol : key } end ``` Configure the parameter parser for a given MIME type. It accepts a hash where the key is the symbol of the MIME type and the value is a proc. ``` original_parsers = ActionDispatch::Request.parameter_parsers xml_parser = -> (raw_post) { Hash.from_xml(raw_post) || {} } new_parsers = original_parsers.merge(xml: xml_parser) ActionDispatch::Request.parameter_parsers = new_parsers ``` rails class ActionDispatch::Http::Parameters::ParseError class ActionDispatch::Http::Parameters::ParseError =================================================== Parent: StandardError Raised when raw data from the request cannot be parsed by the parser defined for request's content MIME type. new(message = $!.message) Show source ``` # File actionpack/lib/action_dispatch/http/parameters.rb, line 20 def initialize(message = $!.message) super(message) end ``` Calls superclass method rails module ActionDispatch::Routing::Redirection module ActionDispatch::Routing::Redirection ============================================ redirect(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/redirection.rb, line 185 def redirect(*args, &block) options = args.extract_options! status = options.delete(:status) || 301 path = args.shift return OptionRedirect.new(status, options) if options.any? 
return PathRedirect.new(status, path) if String === path block = path if path.respond_to? :call raise ArgumentError, "redirection argument not supported" unless block Redirect.new status, block end ``` Redirect any path to another path: ``` get "/stories" => redirect("/posts") ``` This will redirect the user, while ignoring certain parts of the request, including query string, etc. `/stories`, `/stories?foo=bar`, etc. all redirect to `/posts`. You can also use interpolation in the supplied redirect argument: ``` get 'docs/:article', to: redirect('/wiki/%{article}') ``` Note that if you return a path without a leading slash then the URL is prefixed with the current SCRIPT\_NAME environment variable. This is typically '/' but may be different in a mounted engine or where the application is deployed to a subdirectory of a website. Alternatively you can use one of the other syntaxes: The block version of redirect allows for the easy encapsulation of any logic associated with the redirect in question. Either the params and request are supplied as arguments, or just params, depending on how many arguments your block accepts. A string is required as a return value. ``` get 'jokes/:number', to: redirect { |params, request| path = (params[:number].to_i.even? ? "wheres-the-beef" : "i-love-lamp") "http://#{request.host_with_port}/#{path}" } ``` Note that the `do end` syntax for the redirect block wouldn't work, as Ruby would pass the block to `get` instead of `redirect`. Use `{ ... }` instead. The options version of redirect allows you to supply only the parts of the URL which need to change; it also supports interpolation of the path similar to the first example. 
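The `%{article}` placeholder above is filled from the route's dynamic segments; the substitution step can be pictured as plain Ruby format-string interpolation (a simplification — the real redirect also handles escaping):

```ruby
# Plain-Ruby sketch of the %{...} interpolation used by
# redirect('/wiki/%{article}'). Simplified: no URL escaping here.
template = "/wiki/%{article}"
format(template, article: "Rails")  # => "/wiki/Rails"
```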
``` get 'stores/:name', to: redirect(subdomain: 'stores', path: '/%{name}') get 'stores/:name(*all)', to: redirect(subdomain: 'stores', path: '/%{name}%{all}') get '/stories', to: redirect(path: '/posts') ``` This will redirect the user, while changing only the specified parts of the request, for example the `path` option in the last example. `/stories`, `/stories?foo=bar`, redirect to `/posts` and `/posts?foo=bar` respectively. Finally, an object which responds to call can be supplied to redirect, allowing you to reuse common redirect routes. The call method must accept two arguments, params and request, and return a string. ``` get 'accounts/:name' => redirect(SubdomainRedirector.new('api')) ``` rails class ActionDispatch::Routing::Mapper class ActionDispatch::Routing::Mapper ====================================== Parent: [Object](../../object) Included modules: [ActionDispatch::Routing::Mapper::Base](mapper/base), [ActionDispatch::Routing::Mapper::HttpHelpers](mapper/httphelpers), [ActionDispatch::Routing::Redirection](redirection), [ActionDispatch::Routing::Mapper::Scoping](mapper/scoping), [ActionDispatch::Routing::Mapper::Concerns](mapper/concerns), [ActionDispatch::Routing::Mapper::Resources](mapper/resources), [ActionDispatch::Routing::Mapper::CustomUrls](mapper/customurls) URL\_OPTIONS normalize\_name(name) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 381 def self.normalize_name(name) normalize_path(name)[1..-1].tr("/", "_") end ``` normalize\_path(path) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 364 def self.normalize_path(path) path = Journey::Router::Utils.normalize_path(path) # the path for a root URL at this point can be something like # "/(/:locale)(/:platform)/(:browser)", and we would want # "/(:locale)(/:platform)(/:browser)" # reverse "/(", "/((" etc to "(/", "((/" etc path.gsub!(%r{/(\(+)/?}, '\1/') # if a path is all optional segments, change the leading "(/" back to # "/(" so 
it evaluates to "/" when interpreted with no options. # Unless, however, at least one secondary segment consists of a static # part, ex. "(/:locale)(/pages/:page)" path.sub!(%r{^(\(+)/}, '/\1') if %r{^(\(+[^)]+\))(\(+/:[^)]+\))*$}.match?(path) path end ``` Invokes Journey::Router::Utils.normalize\_path, then ensures that /(:locale) becomes (/:locale). Except for root cases, where the former is the correct one. rails module ActionDispatch::Routing::PolymorphicRoutes module ActionDispatch::Routing::PolymorphicRoutes ================================================== Polymorphic URL helpers are methods for smart resolution to a named route call when given an Active Record model instance. They are to be used in combination with ActionController::Resources. These methods are useful when you want to generate the correct URL or path to a RESTful resource without having to know the exact type of the record in question. Nested resources and/or namespaces are also supported, as illustrated in the example: ``` polymorphic_url([:admin, @article, @comment]) ``` results in: ``` admin_article_comment_url(@article, @comment) ``` Usage within the framework -------------------------- Polymorphic URL helpers are used in a number of places throughout the Rails framework: * `url_for`, so you can use it with a record as the argument, e.g. `url_for(@article)`; * [`ActionView::Helpers::FormHelper`](../../actionview/helpers/formhelper) uses `polymorphic_path`, so you can write `form_for(@article)` without having to specify `:url` parameter for the form action; * `redirect_to` (which, in fact, uses `url_for`) so you can write `redirect_to(post)` in your controllers; * [`ActionView::Helpers::AtomFeedHelper`](../../actionview/helpers/atomfeedhelper), so you don't have to explicitly specify URLs for feed entries. 
Prefixed polymorphic helpers ---------------------------- In addition to `polymorphic_url` and `polymorphic_path` methods, a number of prefixed helpers are available as a shorthand to `action: "..."` in options. Those are: * `edit_polymorphic_url`, `edit_polymorphic_path` * `new_polymorphic_url`, `new_polymorphic_path` Example usage: ``` edit_polymorphic_path(@post) # => "/posts/1/edit" polymorphic_path(@post, format: :pdf) # => "/posts/1.pdf" ``` Usage with mounted engines -------------------------- If you are using a mounted engine and you need to use a [`polymorphic_url`](polymorphicroutes#method-i-polymorphic_url) pointing at the engine's routes, pass in the engine's route proxy as the first argument to the method. For example: ``` polymorphic_url([blog, @post]) # calls blog.post_path(@post) form_for([blog, @post]) # => "/blog/posts/1" ``` polymorphic\_path(record\_or\_hash\_or\_array, options = {}) Show source ``` # File actionpack/lib/action_dispatch/routing/polymorphic_routes.rb, line 124 def polymorphic_path(record_or_hash_or_array, options = {}) if Hash === record_or_hash_or_array options = record_or_hash_or_array.merge(options) record = options.delete :id return polymorphic_path record, options end if mapping = polymorphic_mapping(record_or_hash_or_array) return mapping.call(self, [record_or_hash_or_array, options], true) end opts = options.dup action = opts.delete :action type = :path HelperMethodBuilder.polymorphic_method self, record_or_hash_or_array, action, type, opts end ``` Returns the path component of a URL for the given record. 
polymorphic\_url(record\_or\_hash\_or\_array, options = {}) Show source ``` # File actionpack/lib/action_dispatch/routing/polymorphic_routes.rb, line 101 def polymorphic_url(record_or_hash_or_array, options = {}) if Hash === record_or_hash_or_array options = record_or_hash_or_array.merge(options) record = options.delete :id return polymorphic_url record, options end if mapping = polymorphic_mapping(record_or_hash_or_array) return mapping.call(self, [record_or_hash_or_array, options], false) end opts = options.dup action = opts.delete :action type = opts.delete(:routing_type) || :url HelperMethodBuilder.polymorphic_method self, record_or_hash_or_array, action, type, opts end ``` Constructs a call to a named RESTful route for the given record and returns the resulting URL string. For example: ``` # calls post_url(post) polymorphic_url(post) # => "http://example.com/posts/1" polymorphic_url([blog, post]) # => "http://example.com/blogs/1/posts/1" polymorphic_url([:admin, blog, post]) # => "http://example.com/admin/blogs/1/posts/1" polymorphic_url([user, :blog, post]) # => "http://example.com/users/1/blog/posts/1" polymorphic_url(Comment) # => "http://example.com/comments" ``` #### Options * `:action` - Specifies the action prefix for the named route: `:new` or `:edit`. Default is no prefix. * `:routing_type` - Allowed values are `:path` or `:url`. Default is `:url`. Also includes all the options from `url_for`. These include such things as `:anchor` or `:trailing_slash`. Example usage is given below: ``` polymorphic_url([blog, post], anchor: 'my_anchor') # => "http://example.com/blogs/1/posts/1#my_anchor" polymorphic_url([blog, post], anchor: 'my_anchor', script_name: "/my_app") # => "http://example.com/my_app/blogs/1/posts/1#my_anchor" ``` For all of these options, see the documentation for [url\_for](urlfor). 
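The record-to-helper mapping can be sketched as follows. This is a hypothetical simplification of the real HelperMethodBuilder logic, with naive `s` pluralization: persisted records map to the member helper, new records and classes to the collection helper.

```ruby
# Hypothetical sketch of how polymorphic_url picks a named-route helper
# -- not the real HelperMethodBuilder logic (pluralization is naive).
def helper_name_for(model_name, persisted:, type: :url)
  base = model_name.downcase
  persisted ? "#{base}_#{type}" : "#{base}s_#{type}"
end

helper_name_for("Article", persisted: true)   # => "article_url"
helper_name_for("Comment", persisted: false)  # => "comments_url"
```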
#### Functionality ``` # an Article record polymorphic_url(record) # same as article_url(record) # a Comment record polymorphic_url(record) # same as comment_url(record) # it recognizes new records and maps to the collection record = Comment.new polymorphic_url(record) # same as comments_url() # the class of a record will also map to the collection polymorphic_url(Comment) # same as comments_url() ``` rails module ActionDispatch::Routing::UrlFor module ActionDispatch::Routing::UrlFor ======================================= Included modules: [ActionDispatch::Routing::PolymorphicRoutes](polymorphicroutes) In `config/routes.rb` you define URL-to-controller mappings, but the reverse is also possible: a URL can be generated from one of your routing definitions. URL generation functionality is centralized in this module. See [`ActionDispatch::Routing`](../routing) for general information about routing and routes.rb. **Tip:** If you need to generate URLs from your models or some other place, then [`ActionController::UrlFor`](../../actioncontroller/urlfor) is what you're looking for. Read on for an introduction. In general, this module should not be included on its own, as it is usually included by url\_helpers (as in [`Rails.application`](../../rails#method-c-application).routes.url\_helpers). URL generation from parameters ------------------------------ As you may know, some functions, such as ActionController::Base#url\_for and [`ActionView::Helpers::UrlHelper#link_to`](../../actionview/helpers/urlhelper#method-i-link_to), can generate URLs given a set of parameters. For example, you've probably had the chance to write code like this in one of your views: ``` <%= link_to('Click here', controller: 'users', action: 'new', message: 'Welcome!') %> # => <a href="/users/new?message=Welcome%21">Click here</a> ``` link\_to, and all other functions that require URL generation functionality, actually use [`ActionController::UrlFor`](../../actioncontroller/urlfor) under the hood. 
And in particular, they use the ActionController::UrlFor#url\_for method. One can generate the same path as the above example by using the following code: ``` include UrlFor url_for(controller: 'users', action: 'new', message: 'Welcome!', only_path: true) # => "/users/new?message=Welcome%21" ``` Notice the `only_path: true` part. This is because [`UrlFor`](urlfor) has no information about the website hostname that your Rails app is serving. So if you want to include the hostname as well, then you must also pass the `:host` argument: ``` include UrlFor url_for(controller: 'users', action: 'new', message: 'Welcome!', host: 'www.example.com') # => "http://www.example.com/users/new?message=Welcome%21" ``` By default, all controllers and views have access to a special version of [`url_for`](urlfor#method-i-url_for), that already knows what the current hostname is. So if you use [`url_for`](urlfor#method-i-url_for) in your controllers or your views, then you don't need to explicitly pass the `:host` argument. For convenience reasons, mailers provide a shortcut for ActionController::UrlFor#url\_for. So within mailers, you only have to type `url_for` instead of 'ActionController::UrlFor#url\_for' in full. However, mailers don't have hostname information, and you still have to provide the `:host` argument or set the default host that will be used in all mailers using the configuration option `config.action_mailer.default_url_options`. For more information on [`url_for`](urlfor#method-i-url_for) in mailers read the ActionMailer#Base documentation. URL generation for named routes ------------------------------- [`UrlFor`](urlfor) also allows one to access methods that have been auto-generated from named routes. For example, suppose that you have a 'users' resource in your `config/routes.rb`: ``` resources :users ``` This generates, among other things, the method `users_path`. By default, this method is accessible from your controllers, views and mailers. 
If you need to access this auto-generated method from other places (such as a model), then you can do that by including [`Rails.application`](../../rails#method-c-application).routes.url\_helpers in your class: ``` class User < ActiveRecord::Base include Rails.application.routes.url_helpers def base_uri user_path(self) end end User.find(1).base_uri # => "/users/1" ``` new(...) Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 106 def initialize(...) @_routes = nil super end ``` Calls superclass method route\_for(name, \*args) Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 213 def route_for(name, *args) public_send(:"#{name}_url", *args) end ``` Allows calling direct or regular named route. ``` resources :buckets direct :recordable do |recording| route_for(:bucket, recording.bucket) end direct :threadable do |threadable| route_for(:recordable, threadable.parent) end ``` This maintains the context of the original caller on whether to return a path or full URL, e.g: ``` threadable_path(threadable) # => "/buckets/1" threadable_url(threadable) # => "http://example.com/buckets/1" ``` url\_for(options = nil) Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 169 def url_for(options = nil) full_url_for(options) end ``` Generate a URL based on the options provided, default\_url\_options and the routes defined in routes.rb. The following options are supported: * `:only_path` - If true, the relative URL is returned. Defaults to `false`. * `:protocol` - The protocol to connect to. Defaults to 'http'. * `:host` - Specifies the host the link should be targeted at. If `:only_path` is false, this option must be provided either explicitly, or via `default_url_options`. * `:subdomain` - Specifies the subdomain of the link, using the `tld_length` to split the subdomain from the host. If false, removes all subdomains from the host part of the link. 
* `:domain` - Specifies the domain of the link, using the `tld_length` to split the domain from the host. * `:tld_length` - Number of labels the TLD is composed of; only used if `:subdomain` or `:domain` are supplied. Defaults to `ActionDispatch::Http::URL.tld_length`, which in turn defaults to 1. * `:port` - Optionally specify the port to connect to. * `:anchor` - An anchor name to be appended to the path. * `:params` - The query parameters to be appended to the path. * `:trailing_slash` - If true, adds a trailing slash, as in “/archive/2009/” * `:script_name` - Specifies application path relative to domain root. If provided, prepends application path. Any other key (`:controller`, `:action`, etc.) given to `url_for` is forwarded to the Routes module. ``` url_for controller: 'tasks', action: 'testing', host: 'somehost.org', port: '8080' # => 'http://somehost.org:8080/tasks/testing' url_for controller: 'tasks', action: 'testing', host: 'somehost.org', anchor: 'ok', only_path: true # => '/tasks/testing#ok' url_for controller: 'tasks', action: 'testing', trailing_slash: true # => 'http://somehost.org/tasks/testing/' url_for controller: 'tasks', action: 'testing', host: 'somehost.org', number: '33' # => 'http://somehost.org/tasks/testing?number=33' url_for controller: 'tasks', action: 'testing', host: 'somehost.org', script_name: "/myapp" # => 'http://somehost.org/myapp/tasks/testing' url_for controller: 'tasks', action: 'testing', host: 'somehost.org', script_name: "/myapp", only_path: true # => '/myapp/tasks/testing' ``` Missing routes keys may be filled in from the current request's parameters (e.g. `:controller`, `:action`, `:id` and any other parameters that are placed in the path). 
Given that the current action has been reached through `GET /users/1`: ``` url_for(only_path: true) # => '/users/1' url_for(only_path: true, action: 'edit') # => '/users/1/edit' url_for(only_path: true, action: 'edit', id: 2) # => '/users/2/edit' ``` Notice that no `:id` parameter was provided to the first `url_for` call and the helper used the one from the route's path. Any path parameter implicitly used by `url_for` can always be overwritten, as shown in the last `url_for` calls. url\_options() Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 114 def url_options default_url_options end ``` Hook overridden in controller to add request information with `default_url_options`. Application logic should not go into url\_options. optimize\_routes\_generation?() Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 218 def optimize_routes_generation? _routes.optimize_routes_generation? && default_url_options.empty? end ``` \_routes\_context() Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 230 def _routes_context # :doc: self end ``` \_with\_routes(routes) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/url_for.rb, line 223 def _with_routes(routes) # :doc: old_routes, @_routes = @_routes, routes yield ensure @_routes = old_routes end ```
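Extra keys such as `number: '33'` in the `url_for` examples above end up in the query string; that step can be sketched with a hypothetical helper (not the Rails implementation):

```ruby
require "uri"

# Hypothetical sketch: leftover url_for keys become query parameters.
def append_query(path, params)
  params.empty? ? path : "#{path}?#{URI.encode_www_form(params)}"
end

append_query("/tasks/testing", number: "33")  # => "/tasks/testing?number=33"
append_query("/tasks/testing", {})            # => "/tasks/testing"
```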
rails module ActionDispatch::Routing::Mapper::Base module ActionDispatch::Routing::Mapper::Base ============================================= default\_url\_options(options) Alias for: [default\_url\_options=](base#method-i-default_url_options-3D) default\_url\_options=(options) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 618 def default_url_options=(options) @set.default_url_options = options end ``` Also aliased as: [default\_url\_options](base#method-i-default_url_options) has\_named\_route?(name) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 630 def has_named_route?(name) @set.named_routes.key?(name) end ``` Queries whether the given named route has already been defined. match(path, options = nil) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 566 def match(path, options = nil) end ``` Matches a URL pattern to one or more routes. You should not use the `match` method in your router without specifying an HTTP method. If you want to expose your action to both GET and POST, use: ``` # sets :controller, :action and :id in params match ':controller/:action/:id', via: [:get, :post] ``` Note that `:controller`, `:action` and `:id` are interpreted as URL query parameters and thus available through `params` in an action. If you want to expose your action to GET, use `get` in the router: Instead of: ``` match ":controller/:action/:id" ``` Do: ``` get ":controller/:action/:id" ``` Two of these symbols are special: `:controller` maps to the controller and `:action` to the controller's action. A pattern can also map wildcard segments (globs) to params: ``` get 'songs/*category/:title', to: 'songs#show' # 'songs/rock/classic/stairway-to-heaven' sets # params[:category] = 'rock/classic' # params[:title] = 'stairway-to-heaven' ``` To match a wildcard parameter, it must have a name assigned to it. Without a variable name to attach the glob parameter to, the route can't be parsed. 
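How the glob segment captures can be pictured with an illustrative regexp (this is not Journey's actual pattern compiler): `*category` matches greedily across slashes, while `:title` takes the final segment.

```ruby
# Illustrative regexp for 'songs/*category/:title' -- not the actual
# Journey pattern compiler. The glob captures greedily across slashes;
# the trailing dynamic segment takes what remains after the last slash.
route = %r{\Asongs/(?<category>.+)/(?<title>[^/]+)\z}
m = route.match("songs/rock/classic/stairway-to-heaven")
m[:category]  # => "rock/classic"
m[:title]     # => "stairway-to-heaven"
```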
When a pattern points to an internal route, the route's `:action` and `:controller` should be set in options or hash shorthand. Examples: ``` match 'photos/:id' => 'photos#show', via: :get match 'photos/:id', to: 'photos#show', via: :get match 'photos/:id', controller: 'photos', action: 'show', via: :get ``` A pattern can also point to a `Rack` endpoint i.e. anything that responds to `call`: ``` match 'photos/:id', to: -> (hash) { [200, {}, ["Coming soon"]] }, via: :get match 'photos/:id', to: PhotoRackApp, via: :get # Yes, controller actions are just rack endpoints match 'photos/:id', to: PhotosController.action(:show), via: :get ``` Because requesting various HTTP verbs with a single action has security implications, you must either specify the actions in the `:via` option or use one of the [`HttpHelpers`](httphelpers) instead of `match`. ### Options Any options not seen here are passed on as params with the URL. :controller The route's controller. :action The route's action. :param Overrides the default resource identifier `:id` (name of the dynamic segment used to generate the routes). You can access that segment from your controller using `params[<:param>]`. In your router: ``` resources :users, param: :name ``` The `users` resource here will have the following routes generated for it: ``` GET /users(.:format) POST /users(.:format) GET /users/new(.:format) GET /users/:name/edit(.:format) GET /users/:name(.:format) PATCH/PUT /users/:name(.:format) DELETE /users/:name(.:format) ``` You can override `ActiveRecord::Base#to_param` of a related model to construct a URL: ``` class User < ActiveRecord::Base def to_param name end end user = User.find_by(name: 'Phusion') user_path(user) # => "/users/Phusion" ``` :path The path prefix for the routes. :module The namespace for :controller. ``` match 'path', to: 'c#a', module: 'sekret', controller: 'posts', via: :get # => Sekret::PostsController ``` See `Scoping#namespace` for its scope equivalent. 
:as The name used to generate routing helpers. :via Allowed HTTP verb(s) for route. ``` match 'path', to: 'c#a', via: :get match 'path', to: 'c#a', via: [:get, :post] match 'path', to: 'c#a', via: :all ``` :to Points to a `Rack` endpoint. Can be an object that responds to `call` or a string representing a controller's action. ``` match 'path', to: 'controller#action', via: :get match 'path', to: -> (env) { [200, {}, ["Success!"]] }, via: :get match 'path', to: RackApp, via: :get ``` :on Shorthand for wrapping routes in a specific RESTful context. Valid values are `:member`, `:collection`, and `:new`. Only use within `resource(s)` block. For example: ``` resource :bar do match 'foo', to: 'c#a', on: :member, via: [:get, :post] end ``` Is equivalent to: ``` resource :bar do member do match 'foo', to: 'c#a', via: [:get, :post] end end ``` :constraints Constrains parameters with a hash of regular expressions or an object that responds to `matches?`. In addition, constraints other than path can also be specified with any object that responds to `===` (e.g. [`String`](../../../string), [`Array`](../../../array), [`Range`](../../../range), etc.). ``` match 'path/:id', constraints: { id: /[A-Z]\d{5}/ }, via: :get match 'json_only', constraints: { format: 'json' }, via: :get class PermitList def matches?(request) request.remote_ip == '1.2.3.4' end end match 'path', to: 'c#a', constraints: PermitList.new, via: :get ``` See `Scoping#constraints` for more examples with its scope equivalent. :defaults Sets defaults for parameters ``` # Sets params[:format] to 'jpg' by default match 'path', to: 'c#a', defaults: { format: 'jpg' }, via: :get ``` See `Scoping#defaults` for its scope equivalent. :anchor Boolean to anchor a `match` pattern. Default is true. When set to false, the pattern matches any request prefixed with the given path. 
``` # Matches any request starting with 'path' match 'path', to: 'c#a', anchor: false, via: :get ``` :format Allows you to specify the default value for optional `format` segment or disable it by supplying `false`. mount(app, options = nil) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 588 def mount(app, options = nil) if options path = options.delete(:at) elsif Hash === app options = app app, path = options.find { |k, _| k.respond_to?(:call) } options.delete(app) if app end raise ArgumentError, "A rack application must be specified" unless app.respond_to?(:call) raise ArgumentError, <<~MSG unless path Must be called with mount point mount SomeRackApp, at: "some_route" or mount(SomeRackApp => "some_route") MSG rails_app = rails_app? app options[:as] ||= app_name(app, rails_app) target_as = name_for_action(options[:as], path) options[:via] ||= :all match(path, options.merge(to: app, anchor: false, format: false)) define_generate_prefix(app, target_as) if rails_app self end ``` Mount a Rack-based application to be used within the application. ``` mount SomeRackApp, at: "some_route" ``` Alternatively: ``` mount(SomeRackApp => "some_route") ``` For options, see `match`, as `mount` uses it internally. All mounted applications come with routing helpers to access them. These are named after the class specified, so for the above example the helper is either `some_rack_app_path` or `some_rack_app_url`. To customize this helper's name, use the `:as` option: ``` mount(SomeRackApp => "some_route", as: "exciting") ``` This will generate the `exciting_path` and `exciting_url` helpers which can be used to navigate to this mounted app. 
with\_default\_scope(scope, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 623 def with_default_scope(scope, &block) scope(scope) do instance_exec(&block) end end ``` rails module ActionDispatch::Routing::Mapper::HttpHelpers module ActionDispatch::Routing::Mapper::HttpHelpers ==================================================== delete(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 719 def delete(*args, &block) map_method(:delete, args, &block) end ``` Define a route that only recognizes HTTP DELETE. For supported arguments, see [match](base#method-i-match) ``` delete 'broccoli', to: 'food#broccoli' ``` get(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 687 def get(*args, &block) map_method(:get, args, &block) end ``` Define a route that only recognizes HTTP GET. For supported arguments, see [match](base#method-i-match) ``` get 'bacon', to: 'food#bacon' ``` options(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 727 def options(*args, &block) map_method(:options, args, &block) end ``` Define a route that only recognizes HTTP OPTIONS. For supported arguments, see [match](base#method-i-match) ``` options 'carrots', to: 'food#carrots' ``` patch(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 703 def patch(*args, &block) map_method(:patch, args, &block) end ``` Define a route that only recognizes HTTP PATCH. For supported arguments, see [match](base#method-i-match) ``` patch 'bacon', to: 'food#bacon' ``` post(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 695 def post(*args, &block) map_method(:post, args, &block) end ``` Define a route that only recognizes HTTP POST. 
For supported arguments, see [match](base#method-i-match)

```
post 'bacon', to: 'food#bacon'
```

put(\*args, &block) Show source

```
# File actionpack/lib/action_dispatch/routing/mapper.rb, line 711
def put(*args, &block)
  map_method(:put, args, &block)
end
```

Define a route that only recognizes HTTP PUT. For supported arguments, see [match](base#method-i-match)

```
put 'bacon', to: 'food#bacon'
```

module ActionDispatch::Routing::Mapper::Resources
==================================================

Resource routing allows you to quickly declare all of the common routes for a given resourceful controller. Instead of declaring separate routes for your `index`, `show`, `new`, `edit`, `create`, `update` and `destroy` actions, a resourceful route declares them in a single line of code:

```
resources :photos
```

Sometimes, you have a resource that clients always look up without referencing an ID. A common example: /profile always shows the profile of the currently logged in user. In this case, you can use a singular resource to map /profile (rather than /profile/:id) to the show action.

```
resource :profile
```

It's common to have resources that are logically children of other resources:

```
resources :magazines do
  resources :ads
end
```

You may wish to organize groups of controllers under a namespace. Most commonly, you might group a number of administrative controllers under an `admin` namespace. You would place these controllers under the `app/controllers/admin` directory, and you can group them together in your router:

```
namespace "admin" do
  resources :posts, :comments
end
```

By default the `:id` parameter doesn't accept dots. If you need to use dots as part of the `:id` parameter, add a constraint which overrides this restriction, e.g:

```
resources :articles, id: /[^\/]+/
```

This allows any character other than a slash as part of your `:id`.
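The effect of that `:id` constraint can be checked with plain Ruby, no Rails required. Note the pattern is anchored here only so it can be tested standalone; Rails anchors path segments itself:

```ruby
# The documented constraint: one or more of any character except a slash.
ID_PATTERN = /\A[^\/]+\z/  # \A..\z anchoring added for standalone testing

with_dots  = "report.2024.pdf".match?(ID_PATTERN) # dots are now accepted
with_slash = "2024/report".match?(ID_PATTERN)     # a slash still is not
```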
CANONICAL\_ACTIONS RESOURCE\_OPTIONS VALID\_ON\_OPTIONS [`CANONICAL_ACTIONS`](resources#CANONICAL_ACTIONS) holds all actions that do not need a prefix or a path appended since they fit properly in their scope level. collection(&block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1500 def collection(&block) unless resource_scope? raise ArgumentError, "can't use collection outside resource(s) scope" end with_scope_level(:collection) do path_scope(parent_resource.collection_scope, &block) end end ``` To add a route to the collection: ``` resources :photos do collection do get 'search' end end ``` This will enable Rails to recognize paths such as `/photos/search` with GET, and route to the search action of `PhotosController`. It will also create the `search_photos_url` and `search_photos_path` route helpers. draw(name) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1587 def draw(name) path = @draw_paths.find do |_path| File.exist? "#{_path}/#{name}.rb" end unless path msg = "Your router tried to #draw the external file #{name}.rb,\n" \ "but the file was not found in:\n\n" msg += @draw_paths.map { |_path| " * #{_path}" }.join("\n") raise ArgumentError, msg end route_path = "#{path}/#{name}.rb" instance_eval(File.read(route_path), route_path.to_s) end ``` match(path, \*rest, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1609 def match(path, *rest, &block) if rest.empty? && Hash === path options = path path, to = options.find { |name, _value| name.is_a?(String) } raise ArgumentError, "Route path not specified" if path.nil?
case to when Symbol options[:action] = to when String if /#/.match?(to) options[:to] = to else options[:controller] = to end else options[:to] = to end options.delete(path) paths = [path] else options = rest.pop || {} paths = [path] + rest end if options.key?(:defaults) defaults(options.delete(:defaults)) { map_match(paths, options, &block) } else map_match(paths, options, &block) end end ``` Matches a URL pattern to one or more routes. For more information, see [match](base#method-i-match). ``` match 'path' => 'controller#action', via: :patch match 'path', to: 'controller#action', via: :post match 'path', 'otherpath', on: :member, via: :get ``` member(&block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1521 def member(&block) unless resource_scope? raise ArgumentError, "can't use member outside resource(s) scope" end with_scope_level(:member) do if shallow? shallow_scope { path_scope(parent_resource.member_scope, &block) } else path_scope(parent_resource.member_scope, &block) end end end ``` To add a member route, add a member block into the resource block: ``` resources :photos do member do get 'preview' end end ``` This will recognize `/photos/1/preview` with GET, and route to the preview action of `PhotosController`. It will also create the `preview_photo_url` and `preview_photo_path` helpers. namespace(path, options = {}) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1568 def namespace(path, options = {}) if resource_scope? nested { super } else super end end ``` See [`ActionDispatch::Routing::Mapper::Scoping#namespace`](scoping#method-i-namespace). Calls superclass method nested(&block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1547 def nested(&block) unless resource_scope? raise ArgumentError, "can't use nested outside resource(s) scope" end with_scope_level(:nested) do if shallow? 
&& shallow_nesting_depth >= 1 shallow_scope do path_scope(parent_resource.nested_scope) do scope(nested_options, &block) end end else path_scope(parent_resource.nested_scope) do scope(nested_options, &block) end end end end ``` new(&block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1537 def new(&block) unless resource_scope? raise ArgumentError, "can't use new outside resource(s) scope" end with_scope_level(:new) do path_scope(parent_resource.new_scope(action_path(:new)), &block) end end ``` resource(\*resources) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1292 def resource(*resources, &block) options = resources.extract_options!.dup if apply_common_behavior_for(:resource, resources, options, &block) return self end with_scope_level(:resource) do options = apply_action_options options resource_scope(SingletonResource.new(resources.pop, api_only?, @scope[:shallow], options)) do yield if block_given? concerns(options[:concerns]) if options[:concerns] new do get :new end if parent_resource.actions.include?(:new) set_member_mappings_for_resource collection do post :create end if parent_resource.actions.include?(:create) end end self end ``` Sometimes, you have a resource that clients always look up without referencing an ID. A common example, /profile always shows the profile of the currently logged in user. In this case, you can use a singular resource to map /profile (rather than /profile/:id) to the show action: ``` resource :profile ``` This creates six different routes in your application, all mapping to the `Profiles` controller (note that the controller is named after the plural): ``` GET /profile/new GET /profile GET /profile/edit PATCH/PUT /profile DELETE /profile POST /profile ``` If you want instances of a model to work with this resource via record identification (e.g. 
in `form_with` or `redirect_to`), you will need to call [resolve](customurls#method-i-resolve): ``` resource :profile resolve('Profile') { [:profile] } # Enables this to work with singular routes: form_with(model: @profile) {} ``` ### Options Takes same options as [resources](resources#method-i-resources) resources(\*resources) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1458 def resources(*resources, &block) options = resources.extract_options!.dup if apply_common_behavior_for(:resources, resources, options, &block) return self end with_scope_level(:resources) do options = apply_action_options options resource_scope(Resource.new(resources.pop, api_only?, @scope[:shallow], options)) do yield if block_given? concerns(options[:concerns]) if options[:concerns] collection do get :index if parent_resource.actions.include?(:index) post :create if parent_resource.actions.include?(:create) end new do get :new end if parent_resource.actions.include?(:new) set_member_mappings_for_resource end end self end ``` In Rails, a resourceful route provides a mapping between HTTP verbs and URLs and controller actions. By convention, each action also maps to particular CRUD operations in a database. 
A single entry in the routing file, such as ``` resources :photos ``` creates seven different routes in your application, all mapping to the `Photos` controller: ``` GET /photos GET /photos/new POST /photos GET /photos/:id GET /photos/:id/edit PATCH/PUT /photos/:id DELETE /photos/:id ``` [`Resources`](resources) can also be nested infinitely by using this block syntax: ``` resources :photos do resources :comments end ``` This generates the following comments routes: ``` GET /photos/:photo_id/comments GET /photos/:photo_id/comments/new POST /photos/:photo_id/comments GET /photos/:photo_id/comments/:id GET /photos/:photo_id/comments/:id/edit PATCH/PUT /photos/:photo_id/comments/:id DELETE /photos/:photo_id/comments/:id ``` ### Options Takes same options as [match](base#method-i-match) as well as: :path\_names Allows you to change the segment component of the `edit` and `new` actions. Actions not specified are not changed. ``` resources :posts, path_names: { new: "brand_new" } ``` The above example will now change /posts/new to /posts/brand\_new. :path Allows you to change the path prefix for the resource. ``` resources :posts, path: 'postings' ``` The resource and all segments will now route to /postings instead of /posts. :only Only generate routes for the given actions. ``` resources :cows, only: :show resources :cows, only: [:show, :index] ``` :except Generate all routes except for the given actions. ``` resources :cows, except: :show resources :cows, except: [:show, :index] ``` :shallow Generates shallow routes for nested resource(s). When placed on a parent resource, generates shallow routes for all nested resources. 
``` resources :posts, shallow: true do resources :comments end ``` Is the same as: ``` resources :posts do resources :comments, except: [:show, :edit, :update, :destroy] end resources :comments, only: [:show, :edit, :update, :destroy] ``` This allows URLs for resources that otherwise would be deeply nested such as a comment on a blog post like `/posts/a-long-permalink/comments/1234` to be shortened to just `/comments/1234`. Set `shallow: false` on a child resource to ignore a parent's shallow parameter. :shallow\_path Prefixes nested shallow routes with the specified path. ``` scope shallow_path: "sekret" do resources :posts do resources :comments, shallow: true end end ``` The `comments` resource here will have the following routes generated for it: ``` post_comments GET /posts/:post_id/comments(.:format) post_comments POST /posts/:post_id/comments(.:format) new_post_comment GET /posts/:post_id/comments/new(.:format) edit_comment GET /sekret/comments/:id/edit(.:format) comment GET /sekret/comments/:id(.:format) comment PATCH/PUT /sekret/comments/:id(.:format) comment DELETE /sekret/comments/:id(.:format) ``` :shallow\_prefix Prefixes nested shallow route names with specified prefix. ``` scope shallow_prefix: "sekret" do resources :posts do resources :comments, shallow: true end end ``` The `comments` resource here will have the following routes generated for it: ``` post_comments GET /posts/:post_id/comments(.:format) post_comments POST /posts/:post_id/comments(.:format) new_post_comment GET /posts/:post_id/comments/new(.:format) edit_sekret_comment GET /comments/:id/edit(.:format) sekret_comment GET /comments/:id(.:format) sekret_comment PATCH/PUT /comments/:id(.:format) sekret_comment DELETE /comments/:id(.:format) ``` :format Allows you to specify the default value for optional `format` segment or disable it by supplying `false`. :param Allows you to override the default param name of `:id` in the URL. 
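A sketch of the `:param` option described above (the `:identifier` name is illustrative, not required by Rails):

```ruby
# config/routes.rb -- routes.rb fragment, runnable only inside a Rails app.
# Overrides the default :id segment name for this resource.
resources :videos, param: :identifier
# Member routes become /videos/:identifier instead of /videos/:id,
# and the controller reads params[:identifier] rather than params[:id].
```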
### Examples ``` # routes call <tt>Admin::PostsController</tt> resources :posts, module: "admin" # resource actions are at /admin/posts. resources :posts, path: "admin/posts" ``` resources\_path\_names(options) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1257 def resources_path_names(options) @scope[:path_names].merge!(options) end ``` root(path, options = {}) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1656 def root(path, options = {}) if path.is_a?(String) options[:to] = path elsif path.is_a?(Hash) && options.empty? options = path else raise ArgumentError, "must be called with a path and/or options" end if @scope.resources? with_scope_level(:root) do path_scope(parent_resource.path) do match_root_route(options) end end else match_root_route(options) end end ``` You can specify what Rails should route “/” to with the root method: ``` root to: 'pages#main' ``` For options, see `match`, as `root` uses it internally. You can also pass a string which will expand ``` root 'pages#main' ``` You should put the root route at the top of `config/routes.rb`, because this means it will be matched first. As this is the most popular route of most Rails applications, this is beneficial. shallow() { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1576 def shallow @scope = @scope.new(shallow: true) yield ensure @scope = @scope.parent end ``` shallow?() Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1583 def shallow? !parent_resource.singleton? && @scope[:shallow] end ``` api\_only?() Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1856 def api_only? # :doc: @set.api_only? 
end ``` set\_member\_mappings\_for\_resource() Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1844 def set_member_mappings_for_resource # :doc: member do get :edit if parent_resource.actions.include?(:edit) get :show if parent_resource.actions.include?(:show) if parent_resource.actions.include?(:update) patch :update put :update end delete :destroy if parent_resource.actions.include?(:destroy) end end ``` with\_scope\_level(kind) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1740 def with_scope_level(kind) # :doc: @scope = @scope.new_level(kind) yield ensure @scope = @scope.parent end ```
rails module ActionDispatch::Routing::Mapper::Scoping module ActionDispatch::Routing::Mapper::Scoping ================================================ You may wish to organize groups of controllers under a namespace. Most commonly, you might group a number of administrative controllers under an `admin` namespace. You would place these controllers under the `app/controllers/admin` directory, and you can group them together in your router: ``` namespace "admin" do resources :posts, :comments end ``` This will create a number of routes for each of the posts and comments controller. For `Admin::PostsController`, Rails will create: ``` GET /admin/posts GET /admin/posts/new POST /admin/posts GET /admin/posts/1 GET /admin/posts/1/edit PATCH/PUT /admin/posts/1 DELETE /admin/posts/1 ``` If you want to route /posts (without the prefix /admin) to `Admin::PostsController`, you could use ``` scope module: "admin" do resources :posts end ``` or, for a single case ``` resources :posts, module: "admin" ``` If you want to route /admin/posts to `PostsController` (without the `Admin::` module prefix), you could use ``` scope "/admin" do resources :posts end ``` or, for a single case ``` resources :posts, path: "/admin/posts" ``` In each of these cases, the named routes remain the same as if you did not use scope. In the last case, the following paths map to `PostsController`: ``` GET /admin/posts GET /admin/posts/new POST /admin/posts GET /admin/posts/1 GET /admin/posts/1/edit PATCH/PUT /admin/posts/1 DELETE /admin/posts/1 ``` constraints(constraints = {}, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 999 def constraints(constraints = {}, &block) scope(constraints: constraints, &block) end ``` ### Parameter Restriction Allows you to constrain the nested routes based on a set of rules. 
For instance, in order to change the routes to allow for a dot character in the `id` parameter: ``` constraints(id: /\d+\.\d+/) do resources :posts end ``` Now routes such as `/posts/1` will no longer be valid, but `/posts/1.1` will be. The `id` parameter must match the constraint passed in for this example. You may use this to also restrict other parameters: ``` resources :posts do constraints(post_id: /\d+\.\d+/) do resources :comments end end ``` ### Restricting based on IP Routes can also be constrained to an IP or a certain range of IP addresses: ``` constraints(ip: /192\.168\.\d+\.\d+/) do resources :posts end ``` Any user connecting from the 192.168.\* range will be able to see this resource, whereas any user connecting outside of this range will be told there is no such route. ### Dynamic request matching Requests to routes can be constrained based on specific criteria: ``` constraints(-> (req) { /iPhone/.match?(req.env["HTTP_USER_AGENT"]) }) do resources :iphones end ``` You are able to move this logic out into a class if it is too complex for routes. This class must have a `matches?` method defined on it which either returns `true` if the user should be given access to that route, or `false` if the user should not. ``` class Iphone def self.matches?(request) /iPhone/.match?(request.env["HTTP_USER_AGENT"]) end end ``` An expected place for this code would be `lib/constraints`. This class is then used like this: ``` constraints(Iphone) do resources :iphones end ``` controller(controller) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 884 def controller(controller) @scope = @scope.new(controller: controller) yield ensure @scope = @scope.parent end ``` Scopes routes to a specific controller ``` controller "food" do match "bacon", action: :bacon, via: :get end ``` defaults(defaults = {}) { || ... 
} Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 1008 def defaults(defaults = {}) @scope = @scope.new(defaults: merge_defaults_scope(@scope[:defaults], defaults)) yield ensure @scope = @scope.parent end ``` Allows you to set default parameters for a route, such as this: ``` defaults id: 'home' do match 'scoped_pages/(:id)', to: 'pages#show' end ``` Using this, the `:id` parameter here will default to 'home'. namespace(path, options = {}, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 929 def namespace(path, options = {}, &block) path = path.to_s defaults = { module: path, as: options.fetch(:as, path), shallow_path: options.fetch(:path, path), shallow_prefix: options.fetch(:as, path) } path_scope(options.delete(:path) { path }) do scope(defaults.merge!(options), &block) end end ``` Scopes routes to a specific namespace. For example: ``` namespace :admin do resources :posts end ``` This generates the following routes: ``` admin_posts GET /admin/posts(.:format) admin/posts#index admin_posts POST /admin/posts(.:format) admin/posts#create new_admin_post GET /admin/posts/new(.:format) admin/posts#new edit_admin_post GET /admin/posts/:id/edit(.:format) admin/posts#edit admin_post GET /admin/posts/:id(.:format) admin/posts#show admin_post PATCH/PUT /admin/posts/:id(.:format) admin/posts#update admin_post DELETE /admin/posts/:id(.:format) admin/posts#destroy ``` ### Options The `:path`, `:as`, `:module`, `:shallow_path` and `:shallow_prefix` options all default to the name of the namespace. For options, see `Base#match`. For `:shallow_path` option, see `Resources#resources`. 
``` # accessible through /sekret/posts rather than /admin/posts namespace :admin, path: "sekret" do resources :posts end # maps to <tt>Sekret::PostsController</tt> rather than <tt>Admin::PostsController</tt> namespace :admin, module: "sekret" do resources :posts end # generates +sekret_posts_path+ rather than +admin_posts_path+ namespace :admin, as: "sekret" do resources :posts end ``` scope(\*args) { || ... } Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 825 def scope(*args) options = args.extract_options!.dup scope = {} options[:path] = args.flatten.join("/") if args.any? options[:constraints] ||= {} unless nested_scope? options[:shallow_path] ||= options[:path] if options.key?(:path) options[:shallow_prefix] ||= options[:as] if options.key?(:as) end if options[:constraints].is_a?(Hash) defaults = options[:constraints].select do |k, v| URL_OPTIONS.include?(k) && (v.is_a?(String) || v.is_a?(Integer)) end options[:defaults] = defaults.merge(options[:defaults] || {}) else block, options[:constraints] = options[:constraints], {} end if options.key?(:only) || options.key?(:except) scope[:action_options] = { only: options.delete(:only), except: options.delete(:except) } end if options.key? :anchor raise ArgumentError, "anchor is ignored unless passed to `match`" end @scope.options.each do |option| if option == :blocks value = block elsif option == :options value = options else value = options.delete(option) { POISON } end unless POISON == value scope[option] = send("merge_#{option}_scope", @scope[option], value) end end @scope = @scope.new scope yield self ensure @scope = @scope.parent end ``` Scopes a set of routes to the given default options. Take the following route definition as an example: ``` scope path: ":account_id", as: "account" do resources :projects end ``` This generates helpers such as `account_projects_path`, just like `resources` does. 
The difference here being that the routes generated are like /:account\_id/projects, rather than /accounts/:account\_id/projects. ### Options Takes same options as `Base#match` and `Resources#resources`. ``` # route /posts (without the prefix /admin) to <tt>Admin::PostsController</tt> scope module: "admin" do resources :posts end # prefix the posts resource's requests with '/admin' scope path: "/admin" do resources :posts end # prefix the routing helper name: +sekret_posts_path+ instead of +posts_path+ scope as: "sekret" do resources :posts end ``` rails module ActionDispatch::Routing::Mapper::CustomUrls module ActionDispatch::Routing::Mapper::CustomUrls =================================================== direct(name, options = {}, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 2124 def direct(name, options = {}, &block) unless @scope.root? raise RuntimeError, "The direct method can't be used inside a routes scope block" end @set.add_url_helper(name, options, &block) end ``` Define custom URL helpers that will be added to the application's routes. This allows you to override and/or replace the default behavior of routing helpers, e.g: ``` direct :homepage do "https://rubyonrails.org" end direct :commentable do |model| [ model, anchor: model.dom_id ] end direct :main do { controller: "pages", action: "index", subdomain: "www" } end ``` The return value from the block passed to `direct` must be a valid set of arguments for `url_for` which will actually build the URL string. This can be one of the following: * A string, which is treated as a generated URL * A hash, e.g. `{ controller: "pages", action: "index" }` * An array, which is passed to `polymorphic_url` * An Active Model instance * An Active Model class NOTE: Other URL helpers can be called in the block but be careful not to invoke your custom URL helper again otherwise it will result in a stack overflow error. 
You can also specify default options that will be passed through to your URL helper definition, e.g: ``` direct :browse, page: 1, size: 10 do |options| [ :products, options.merge(params.permit(:page, :size).to_h.symbolize_keys) ] end ``` In this instance the `params` object comes from the context in which the block is executed, e.g. generating a URL inside a controller action or a view. If the block is executed where there isn't a `params` object such as this: ``` Rails.application.routes.url_helpers.browse_path ``` then it will raise a `NameError`. Because of this you need to be aware of the context in which you will use your custom URL helper when defining it. NOTE: The `direct` method can't be used inside of a scope block such as `namespace` or `scope` and will raise an error if it detects that it is. resolve(\*args, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 2176 def resolve(*args, &block) unless @scope.root? raise RuntimeError, "The resolve method can't be used inside a routes scope block" end options = args.extract_options! args = args.flatten(1) args.each do |klass| @set.add_polymorphic_mapping(klass, options, &block) end end ``` Define custom polymorphic mappings of models to URLs. This alters the behavior of `polymorphic_url` and consequently the behavior of `link_to` and `form_for` when passed a model instance, e.g: ``` resource :basket resolve "Basket" do [:basket] end ``` This will now generate “/basket” when a `Basket` instance is passed to `link_to` or `form_for` instead of the standard “/baskets/:id”. 
NOTE: This custom behavior only applies to simple polymorphic URLs where a single model instance is passed and not more complicated forms, e.g:

```
# config/routes.rb
resource :profile
namespace :admin do
  resources :users
end
resolve("User") { [:profile] }

# app/views/application/_menu.html.erb
link_to "Profile", @current_user
link_to "Profile", [:admin, @current_user]
```

The first `link_to` will generate “/profile” but the second will generate the standard polymorphic URL of “/admin/users/1”.

You can pass options to a polymorphic mapping - the arity for the block needs to be two as the instance is passed as the first argument, e.g:

```
resolve "Basket", anchor: "items" do |basket, options|
  [:basket, options]
end
```

This generates the URL “/basket#items” because when the last item in an array passed to `polymorphic_url` is a hash then it's treated as options to the URL helper that gets called.

NOTE: The `resolve` method can't be used inside of a scope block such as `namespace` or `scope` and will raise an error if it detects that it is.

module ActionDispatch::Routing::Mapper::Concerns
=================================================

[`Routing`](../../routing) [`Concerns`](concerns) allow you to declare common routes that can be reused inside other resources and routes.
``` concern :commentable do resources :comments end concern :image_attachable do resources :images, only: :index end ``` These concerns are used in [`Resources`](resources) routing: ``` resources :messages, concerns: [:commentable, :image_attachable] ``` or in a scope or namespace: ``` namespace :posts do concerns :commentable end ``` concern(name, callable = nil, &block) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 2048 def concern(name, callable = nil, &block) callable ||= lambda { |mapper, options| mapper.instance_exec(options, &block) } @concerns[name] = callable end ``` Define a routing concern using a name. [`Concerns`](concerns) may be defined inline, using a block, or handled by another object, by passing that object as the second parameter. The concern object, if supplied, should respond to `call`, which will receive two parameters: ``` * The current mapper * A hash of options which the concern object may use ``` Options may also be used by concerns defined in a block by accepting a block parameter. So, using a block, you might do something as simple as limit the actions available on certain resources, passing standard resource options through the concern: ``` concern :commentable do |options| resources :comments, options end resources :posts, concerns: :commentable resources :archived_posts do # Don't allow comments on archived posts concerns :commentable, only: [:index, :show] end ``` Or, using a callable object, you might implement something more specific to your application, which would be out of place in your routes file. 
``` # purchasable.rb class Purchasable def initialize(defaults = {}) @defaults = defaults end def call(mapper, options = {}) options = @defaults.merge(options) mapper.resources :purchases mapper.resources :receipts mapper.resources :returns if options[:returnable] end end # routes.rb concern :purchasable, Purchasable.new(returnable: true) resources :toys, concerns: :purchasable resources :electronics, concerns: :purchasable resources :pets do concerns :purchasable, returnable: false end ``` Any routing helpers can be used inside a concern. If using a callable, they're accessible from the [`Mapper`](../mapper) that's passed to `call`. concerns(\*args) Show source ``` # File actionpack/lib/action_dispatch/routing/mapper.rb, line 2064 def concerns(*args) options = args.extract_options! args.flatten.each do |name| if concern = @concerns[name] concern.call(self, options) else raise ArgumentError, "No concern named #{name} was found!" end end end ``` Use the named concerns ``` resources :posts do concerns :commentable end ``` [`Concerns`](concerns) also work in any routes helper that you want to use: ``` namespace :posts do concerns :commentable end ``` rails module ActionDispatch::Cookies::ChainedCookieJars module ActionDispatch::Cookies::ChainedCookieJars ================================================== Include in a cookie jar to allow chaining, e.g. cookies.permanent.signed. encrypted() Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 249 def encrypted @encrypted ||= EncryptedKeyRotatingCookieJar.new(self) end ``` Returns a jar that'll automatically encrypt cookie values before sending them to the client and will decrypt them for read. If the cookie was tampered with by the user (or a 3rd party), `nil` will be returned. If `config.action_dispatch.encrypted_cookie_salt` and `config.action_dispatch.encrypted_signed_cookie_salt` are both set, legacy cookies encrypted with HMAC AES-256-CBC will be transparently upgraded. 
This jar requires that you set a suitable secret for the verification on your app's `secret_key_base`. Example: ``` cookies.encrypted[:discount] = 45 # => Set-Cookie: discount=DIQ7fw==--K3n//8vvnSbGq9dA--7Xh91HfLpwzbj1czhBiwOg==; path=/ cookies.encrypted[:discount] # => 45 ``` permanent() Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 215 def permanent @permanent ||= PermanentCookieJar.new(self) end ``` Returns a jar that'll automatically set the assigned cookies to have an expiration date 20 years from now. Example: ``` cookies.permanent[:prefers_open_id] = true # => Set-Cookie: prefers_open_id=true; path=/; expires=Sun, 16-Dec-2029 03:24:16 GMT ``` This jar is only meant for writing. You'll read permanent cookies through the regular accessor. This jar allows chaining with the signed jar as well, so you can set permanent, signed cookies. Examples: ``` cookies.permanent.signed[:remember_me] = current_user.id # => Set-Cookie: remember_me=BAhU--848956038e692d7046deab32b7131856ab20e14e; path=/; expires=Sun, 16-Dec-2029 03:24:16 GMT ``` signed() Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 231 def signed @signed ||= SignedKeyRotatingCookieJar.new(self) end ``` Returns a jar that'll automatically generate a signed representation of cookie value and verify it when reading from the cookie again. This is useful for creating cookies with values that the user is not supposed to change. If a signed cookie was tampered with by the user (or a 3rd party), `nil` will be returned. This jar requires that you set a suitable secret for the verification on your app's `secret_key_base`. 
Example: ``` cookies.signed[:discount] = 45 # => Set-Cookie: discount=BAhpMg==--2c1c6906c90a3bc4fd54a51ffb41dffa4bf6b5f7; path=/ cookies.signed[:discount] # => 45 ``` signed\_or\_encrypted() Show source ``` # File actionpack/lib/action_dispatch/middleware/cookies.rb, line 255 def signed_or_encrypted @signed_or_encrypted ||= if request.secret_key_base.present? encrypted else signed end end ``` Returns the `signed` or `encrypted` jar, preferring `encrypted` if `secret_key_base` is set. Used by [`ActionDispatch::Session::CookieStore`](../session/cookiestore) to avoid the need to introduce new cookie stores. rails class ActionDispatch::PermissionsPolicy::Middleware class ActionDispatch::PermissionsPolicy::Middleware ==================================================== Parent: [Object](../../object) CONTENT\_TYPE POLICY The Feature-Policy header has been renamed to Permissions-Policy. The Permissions-Policy requires a different implementation and isn't yet supported by all browsers. To avoid having to rename this middleware in the future we use the new name for the middleware but keep the old header name and implementation for now. new(app) Show source ``` # File actionpack/lib/action_dispatch/http/permissions_policy.rb, line 16 def initialize(app) @app = app end ``` call(env) Show source ``` # File actionpack/lib/action_dispatch/http/permissions_policy.rb, line 20 def call(env) request = ActionDispatch::Request.new(env) _, headers, _ = response = @app.call(env) return response unless html_response?(headers) return response if policy_present?(headers) if policy = request.permissions_policy headers[POLICY] = policy.build(request.controller_instance) end if policy_empty?(policy) headers.delete(POLICY) end response end ```
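The middleware above only serializes and emits the header; the policy itself is defined at the application level. As a hedged example — assuming the `Rails.application.config.permissions_policy` configuration hook introduced in Rails 6.1 — an initializer like the following is what `request.permissions_policy` would pick up:

```
# config/initializers/permissions_policy.rb
# Assumes the Rails 6.1+ permissions_policy configuration hook.
Rails.application.config.permissions_policy do |policy|
  policy.camera     :none  # disallow camera access everywhere
  policy.microphone :none  # disallow microphone access everywhere
  policy.fullscreen :self  # allow fullscreen only for same-origin content
end
```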
rails class ActionDispatch::Session::MemCacheStore class ActionDispatch::Session::MemCacheStore ============================================= Parent: Rack::Session::Dalli Included modules: ActionDispatch::Session::Compatibility, ActionDispatch::Session::StaleSessionCheck A session store that uses MemCache to implement storage. #### Options * `expire_after` - The length of time a session will be stored before automatically expiring. new(app, options = {}) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/mem_cache_store.rb, line 22 def initialize(app, options = {}) options[:expire_after] ||= options[:expires] super end ``` Calls superclass method `ActionDispatch::Session::Compatibility::new` rails class ActionDispatch::Session::CookieStore class ActionDispatch::Session::CookieStore =========================================== Parent: ActionDispatch::Session::AbstractSecureStore This cookie-based session store is the Rails default. It is dramatically faster than the alternatives. Sessions typically contain at most a user\_id and flash message; both fit within the 4096-byte cookie size limit. A CookieOverflow exception is raised if you attempt to store more than 4096 bytes of data. The cookie jar used for storage is automatically configured to be the best possible option given your application's configuration. Your cookies will be encrypted using your app's secret\_key\_base. This goes a step further than signed cookies in that encrypted cookies cannot be altered or read by users. This is the default starting in Rails 4. Configure your session store in an initializer: ``` Rails.application.config.session_store :cookie_store, key: '_your_app_session' ``` In the development and test environments your application's secret key base is generated by Rails and stored in a temporary file in `tmp/development_secret.txt`. In all other environments, it is stored encrypted in the `config/credentials.yml.enc` file. 
If your application was not updated to Rails 5.2 defaults, the secret\_key\_base will be found in the old `config/secrets.yml` file. Note that changing your secret\_key\_base will invalidate all existing sessions. Additionally, you should take care to make sure you are not relying on the ability to decode signed cookies generated by your app in external applications or JavaScript before changing it. Because [`CookieStore`](cookiestore) extends Rack::Session::Abstract::Persisted, many of the options described there can be used to customize the session cookie that is generated. For example: ``` Rails.application.config.session_store :cookie_store, expire_after: 14.days ``` would set the session cookie to expire automatically 14 days after creation. Other useful options include `:key`, `:secure` and `:httponly`. new(app, options = {}) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cookie_store.rb, line 59 def initialize(app, options = {}) super(app, options.merge!(cookie_only: true)) end ``` Calls superclass method `ActionDispatch::Session::Compatibility::new` delete\_session(req, session\_id, options) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cookie_store.rb, line 63 def delete_session(req, session_id, options) new_sid = generate_sid unless options[:drop] # Reset hash and Assign the new session id req.set_header("action_dispatch.request.unsigned_session_cookie", new_sid ? { "session_id" => new_sid.public_id } : {}) new_sid end ``` load\_session(req) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cookie_store.rb, line 70 def load_session(req) stale_session_check! 
do data = unpacked_cookie_data(req) data = persistent_session_id!(data) [Rack::Session::SessionId.new(data["session_id"]), data] end end ``` rails class ActionDispatch::Session::CacheStore class ActionDispatch::Session::CacheStore ========================================== Parent: ActionDispatch::Session::AbstractSecureStore A session store that uses an [`ActiveSupport::Cache::Store`](../../activesupport/cache/store) to store the sessions. This store is most useful if you don't store critical data in your sessions and you don't need them to live for extended periods of time. #### Options * `cache` - The cache to use. If it is not specified, `Rails.cache` will be used. * `expire_after` - The length of time a session will be stored before automatically expiring. By default, the `:expires_in` option of the cache is used. new(app, options = {}) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cache_store.rb, line 16 def initialize(app, options = {}) @cache = options[:cache] || Rails.cache options[:expire_after] ||= @cache.options[:expires_in] super end ``` Calls superclass method `ActionDispatch::Session::Compatibility::new` delete\_session(env, sid, options) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cache_store.rb, line 42 def delete_session(env, sid, options) @cache.delete(cache_key(sid.private_id)) @cache.delete(cache_key(sid.public_id)) generate_sid end ``` Remove a session from the cache. find\_session(env, sid) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cache_store.rb, line 23 def find_session(env, sid) unless sid && (session = get_session_with_fallback(sid)) sid, session = generate_sid, {} end [sid, session] end ``` Get a session from the cache. 
write\_session(env, sid, session, options) Show source ``` # File actionpack/lib/action_dispatch/middleware/session/cache_store.rb, line 31 def write_session(env, sid, session, options) key = cache_key(sid.private_id) if session @cache.write(key, session, expires_in: options[:expire_after]) else @cache.delete(key) end sid end ``` Set a session in the cache. rails The Basics of Creating Rails Plugins The Basics of Creating Rails Plugins ==================================== A Rails plugin is either an extension or a modification of the core framework. Plugins provide: * A way for developers to share bleeding-edge ideas without hurting the stable code base. * A segmented architecture so that units of code can be fixed or updated on their own release schedule. * An outlet for the core developers so that they don't have to include every cool new feature under the sun. After reading this guide, you will know: * How to create a plugin from scratch. * How to write and run tests for the plugin. This guide describes how to build a test-driven plugin that will: * Extend core Ruby classes like Hash and String. * Add methods to `ApplicationRecord` in the tradition of the `acts_as` plugins. * Give you information about where to put generators in your plugin. For the purpose of this guide pretend for a moment that you are an avid bird watcher. Your favorite bird is the Yaffle, and you want to create a plugin that allows other developers to share in the Yaffle goodness. Chapters -------- 1. [Setup](#setup) * [Generate a gemified plugin.](#generate-a-gemified-plugin) 2. [Testing Your Newly Generated Plugin](#testing-your-newly-generated-plugin) 3. [Extending Core Classes](#extending-core-classes) 4. [Add an "acts\_as" Method to Active Record](#add-an-acts-as-method-to-active-record) * [Add a Class Method](#add-a-class-method) * [Add an Instance Method](#add-an-instance-method) 5. [Generators](#generators) 6. [Publishing Your Gem](#publishing-your-gem) 7. 
[RDoc Documentation](#rdoc-documentation) * [References](#references) [1 Setup](#setup) ----------------- Currently, Rails plugins are built as gems, *gemified plugins*. They can be shared across different Rails applications using RubyGems and Bundler if desired. ### [1.1 Generate a gemified plugin.](#generate-a-gemified-plugin) Rails ships with a `rails plugin new` command which creates a skeleton for developing any kind of Rails extension with the ability to run integration tests using a dummy Rails application. Create your plugin with the command: ``` $ rails plugin new yaffle ``` See usage and options by asking for help: ``` $ rails plugin new --help ``` [2 Testing Your Newly Generated Plugin](#testing-your-newly-generated-plugin) ----------------------------------------------------------------------------- You can navigate to the directory that contains the plugin, run the `bundle install` command and run the one generated test using the `bin/test` command. You should see: ``` 1 runs, 1 assertions, 0 failures, 0 errors, 0 skips ``` This will tell you that everything got generated properly, and you are ready to start adding functionality. [3 Extending Core Classes](#extending-core-classes) --------------------------------------------------- This section will explain how to add a method to String that will be available anywhere in your Rails application. In this example you will add a method to String named `to_squawk`. To begin, create a new test file with a few assertions: ``` # yaffle/test/core_ext_test.rb require "test_helper" class CoreExtTest < ActiveSupport::TestCase def test_to_squawk_prepends_the_word_squawk assert_equal "squawk! Hello World", "Hello World".to_squawk end end ``` Run `bin/test` to run the test. 
This test should fail because we haven't implemented the `to_squawk` method: ``` E Error: CoreExtTest#test_to_squawk_prepends_the_word_squawk: NoMethodError: undefined method `to_squawk' for "Hello World":String bin/test /path/to/yaffle/test/core_ext_test.rb:4 . Finished in 0.003358s, 595.6483 runs/s, 297.8242 assertions/s. 2 runs, 1 assertions, 0 failures, 1 errors, 0 skips ``` Great - now you are ready to start development. In `lib/yaffle.rb`, add `require "yaffle/core_ext"`: ``` # yaffle/lib/yaffle.rb require "yaffle/railtie" require "yaffle/core_ext" module Yaffle # Your code goes here... end ``` Finally, create the `core_ext.rb` file and add the `to_squawk` method: ``` # yaffle/lib/yaffle/core_ext.rb class String def to_squawk "squawk! #{self}".strip end end ``` To test that your method does what it says it does, run the unit tests with `bin/test` from your plugin directory. ``` 2 runs, 2 assertions, 0 failures, 0 errors, 0 skips ``` To see this in action, change to the `test/dummy` directory, start `bin/rails console`, and commence squawking: ``` irb> "Hello World".to_squawk => "squawk! Hello World" ``` [4 Add an "acts\_as" Method to Active Record](#add-an-acts-as-method-to-active-record) -------------------------------------------------------------------------------------- A common pattern in plugins is to add a method called `acts_as_something` to models. In this case, you want to write a method called `acts_as_yaffle` that adds a `squawk` method to your Active Record models. To begin, set up your files so that you have: ``` # yaffle/test/acts_as_yaffle_test.rb require "test_helper" class ActsAsYaffleTest < ActiveSupport::TestCase end ``` ``` # yaffle/lib/yaffle.rb require "yaffle/railtie" require "yaffle/core_ext" require "yaffle/acts_as_yaffle" module Yaffle # Your code goes here... 
end ``` ``` # yaffle/lib/yaffle/acts_as_yaffle.rb module Yaffle module ActsAsYaffle end end ``` ### [4.1 Add a Class Method](#add-a-class-method) This plugin will expect that you've added a method to your model named `last_squawk`. However, the plugin users might have already defined a method on their model named `last_squawk` that they use for something else. This plugin will allow the name to be changed by adding a class method called `yaffle_text_field`. To start out, write a failing test that shows the behavior you'd like: ``` # yaffle/test/acts_as_yaffle_test.rb require "test_helper" class ActsAsYaffleTest < ActiveSupport::TestCase def test_a_hickwalls_yaffle_text_field_should_be_last_squawk assert_equal "last_squawk", Hickwall.yaffle_text_field end def test_a_wickwalls_yaffle_text_field_should_be_last_tweet assert_equal "last_tweet", Wickwall.yaffle_text_field end end ``` When you run `bin/test`, you should see the following: ``` # Running: ..E Error: ActsAsYaffleTest#test_a_wickwalls_yaffle_text_field_should_be_last_tweet: NameError: uninitialized constant ActsAsYaffleTest::Wickwall bin/test /path/to/yaffle/test/acts_as_yaffle_test.rb:8 E Error: ActsAsYaffleTest#test_a_hickwalls_yaffle_text_field_should_be_last_squawk: NameError: uninitialized constant ActsAsYaffleTest::Hickwall bin/test /path/to/yaffle/test/acts_as_yaffle_test.rb:4 Finished in 0.004812s, 831.2949 runs/s, 415.6475 assertions/s. 4 runs, 2 assertions, 0 failures, 2 errors, 0 skips ``` This tells us that we don't have the necessary models (Hickwall and Wickwall) that we are trying to test. 
We can easily generate these models in our "dummy" Rails application by running the following commands from the `test/dummy` directory: ``` $ cd test/dummy $ bin/rails generate model Hickwall last_squawk:string $ bin/rails generate model Wickwall last_squawk:string last_tweet:string ``` Now you can create the necessary database tables in your testing database by navigating to your dummy app and migrating the database. First, run: ``` $ cd test/dummy $ bin/rails db:migrate ``` While you are here, change the Hickwall and Wickwall models so that they know that they are supposed to act like yaffles. ``` # test/dummy/app/models/hickwall.rb class Hickwall < ApplicationRecord acts_as_yaffle end ``` ``` # test/dummy/app/models/wickwall.rb class Wickwall < ApplicationRecord acts_as_yaffle yaffle_text_field: :last_tweet end ``` We will also add code to define the `acts_as_yaffle` method. ``` # yaffle/lib/yaffle/acts_as_yaffle.rb module Yaffle module ActsAsYaffle extend ActiveSupport::Concern class_methods do def acts_as_yaffle(options = {}) end end end end ``` ``` # test/dummy/app/models/application_record.rb class ApplicationRecord < ActiveRecord::Base include Yaffle::ActsAsYaffle self.abstract_class = true end ``` You can then return to the root directory (`cd ../..`) of your plugin and rerun the tests using `bin/test`. ``` # Running: .E Error: ActsAsYaffleTest#test_a_hickwalls_yaffle_text_field_should_be_last_squawk: NoMethodError: undefined method `yaffle_text_field' for #<Class:0x0055974ebbe9d8> bin/test /path/to/yaffle/test/acts_as_yaffle_test.rb:4 E Error: ActsAsYaffleTest#test_a_wickwalls_yaffle_text_field_should_be_last_tweet: NoMethodError: undefined method `yaffle_text_field' for #<Class:0x0055974eb8cfc8> bin/test /path/to/yaffle/test/acts_as_yaffle_test.rb:8 . Finished in 0.008263s, 484.0999 runs/s, 242.0500 assertions/s. 4 runs, 2 assertions, 0 failures, 2 errors, 0 skips ``` Getting closer... 
Now we will implement the code of the `acts_as_yaffle` method to make the tests pass. ``` # yaffle/lib/yaffle/acts_as_yaffle.rb module Yaffle module ActsAsYaffle extend ActiveSupport::Concern class_methods do def acts_as_yaffle(options = {}) cattr_accessor :yaffle_text_field, default: (options[:yaffle_text_field] || :last_squawk).to_s end end end end ``` ``` # test/dummy/app/models/application_record.rb class ApplicationRecord < ActiveRecord::Base include Yaffle::ActsAsYaffle self.abstract_class = true end ``` When you run `bin/test`, you should see the tests all pass: ``` 4 runs, 4 assertions, 0 failures, 0 errors, 0 skips ``` ### [4.2 Add an Instance Method](#add-an-instance-method) This plugin will add a method named 'squawk' to any Active Record object that calls `acts_as_yaffle`. The 'squawk' method will simply set the value of one of the fields in the database. To start out, write a failing test that shows the behavior you'd like: ``` # yaffle/test/acts_as_yaffle_test.rb require "test_helper" class ActsAsYaffleTest < ActiveSupport::TestCase def test_a_hickwalls_yaffle_text_field_should_be_last_squawk assert_equal "last_squawk", Hickwall.yaffle_text_field end def test_a_wickwalls_yaffle_text_field_should_be_last_tweet assert_equal "last_tweet", Wickwall.yaffle_text_field end def test_hickwalls_squawk_should_populate_last_squawk hickwall = Hickwall.new hickwall.squawk("Hello World") assert_equal "squawk! Hello World", hickwall.last_squawk end def test_wickwalls_squawk_should_populate_last_tweet wickwall = Wickwall.new wickwall.squawk("Hello World") assert_equal "squawk! 
Hello World", wickwall.last_tweet end end ``` Run the test to make sure the last two tests fail with an error that contains "NoMethodError: undefined method `squawk'", then update `acts_as_yaffle.rb` to look like this: ``` # yaffle/lib/yaffle/acts_as_yaffle.rb module Yaffle module ActsAsYaffle extend ActiveSupport::Concern included do def squawk(string) write_attribute(self.class.yaffle_text_field, string.to_squawk) end end class_methods do def acts_as_yaffle(options = {}) cattr_accessor :yaffle_text_field, default: (options[:yaffle_text_field] || :last_squawk).to_s end end end end ``` ``` # test/dummy/app/models/application_record.rb class ApplicationRecord < ActiveRecord::Base include Yaffle::ActsAsYaffle self.abstract_class = true end ``` Run `bin/test` one final time, and you should see: ``` 6 runs, 6 assertions, 0 failures, 0 errors, 0 skips ``` The use of `write_attribute` to write to the field in the model is just one example of how a plugin can interact with the model, and will not always be the right method to use. For example, you could also use: ``` send("#{self.class.yaffle_text_field}=", string.to_squawk) ``` [5 Generators](#generators) --------------------------- Generators can be included in your gem simply by creating them in a `lib/generators` directory of your plugin. More information about the creation of generators can be found in the [Generators Guide](generators). [6 Publishing Your Gem](#publishing-your-gem) --------------------------------------------- Gem plugins currently in development can easily be shared from any Git repository. To share the Yaffle gem with others, simply commit the code to a Git repository (like GitHub) and add a line to the `Gemfile` of the application in question: ``` gem "yaffle", git: "https://github.com/rails/yaffle.git" ``` After running `bundle install`, your gem functionality will be available to the application. 
When the gem is ready to be shared as a formal release, it can be published to [RubyGems](https://rubygems.org). Alternatively, you can benefit from Bundler's Rake tasks. You can see a full list with the following: ``` $ bundle exec rake -T $ bundle exec rake build # Build yaffle-0.1.0.gem into the pkg directory $ bundle exec rake install # Build and install yaffle-0.1.0.gem into system gems $ bundle exec rake release # Create tag v0.1.0 and build and push yaffle-0.1.0.gem to Rubygems ``` For more information about publishing gems to RubyGems, see: [Publishing your gem](https://guides.rubygems.org/publishing). [7 RDoc Documentation](#rdoc-documentation) ------------------------------------------- Once your plugin is stable, and you are ready to deploy, do everyone else a favor and document it! Luckily, writing documentation for your plugin is easy. The first step is to update the README file with detailed information about how to use your plugin. A few key things to include are: * Your name * How to install * How to add the functionality to the app (several examples of common use cases) * Warnings, gotchas or tips that might help users and save them time Once your README is solid, go through and add rdoc comments to all the methods that developers will use. It's also customary to add `# :nodoc:` comments to those parts of the code that are not included in the public API. Once your comments are good to go, navigate to your plugin directory and run: ``` $ bundle exec rake rdoc ``` ### [7.1 References](#references) * [Developing a RubyGem using Bundler](https://github.com/radar/guides/blob/master/gem-development.md) * [Using .gemspecs as Intended](https://yehudakatz.com/2010/04/02/using-gemspecs-as-intended/) * [Gemspec Reference](https://guides.rubygems.org/specification-reference/) Feedback -------- You're encouraged to help improve the quality of this guide. Please contribute if you see any typos or factual errors. 
To get started, you can read our [documentation contributions](https://edgeguides.rubyonrails.org/contributing_to_ruby_on_rails.html#contributing-to-the-rails-documentation) section. You may also find incomplete content or stuff that is not up to date. Please do add any missing documentation for main. Make sure to check [Edge Guides](https://edgeguides.rubyonrails.org) first to verify if the issues are already fixed or not on the main branch. Check the Ruby on Rails Guides Guidelines for style and conventions. If for whatever reason you spot something to fix but cannot patch it yourself, please [open an issue](https://github.com/rails/rails/issues). And last but not least, any kind of discussion regarding Ruby on Rails documentation is very welcome on the [rubyonrails-docs mailing list](https://discuss.rubyonrails.org/c/rubyonrails-docs).
rails Action View Form Helpers Action View Form Helpers ======================== Forms in web applications are an essential interface for user input. However, form markup can quickly become tedious to write and maintain because of the need to handle form control naming and its numerous attributes. Rails does away with this complexity by providing view helpers for generating form markup. However, since these helpers have different use cases, developers need to know the differences between the helper methods before putting them to use. After reading this guide, you will know: * How to create search forms and similar kind of generic forms not representing any specific model in your application. * How to make model-centric forms for creating and editing specific database records. * How to generate select boxes from multiple types of data. * What date and time helpers Rails provides. * What makes a file upload form different. * How to post forms to external resources and specify setting an `authenticity_token`. * How to build complex forms. Chapters -------- 1. [Dealing with Basic Forms](#dealing-with-basic-forms) * [A Generic Search Form](#a-generic-search-form) * [Helpers for Generating Form Elements](#helpers-for-generating-form-elements) * [Other Helpers of Interest](#other-helpers-of-interest) 2. [Dealing with Model Objects](#dealing-with-model-objects) * [Binding a Form to an Object](#binding-a-form-to-an-object) * [Relying on Record Identification](#relying-on-record-identification) * [How do forms with PATCH, PUT, or DELETE methods work?](#how-do-forms-with-patch-put-or-delete-methods-work-questionmark) 3. [Making Select Boxes with Ease](#making-select-boxes-with-ease) * [Option Groups](#option-groups) * [Select Boxes and Model Objects](#select-boxes-and-model-objects) * [Time Zone and Country Select](#time-zone-and-country-select) 4. 
[Using Date and Time Form Helpers](#using-date-and-time-form-helpers) * [Select Boxes for Individual Temporal Components](#select-boxes-for-individual-temporal-components) 5. [Choices from a Collection of Arbitrary Objects](#choices-from-a-collection-of-arbitrary-objects) * [The `collection_select` Helper](#the-collection-select-helper) * [The `collection_radio_buttons` Helper](#the-collection-radio-buttons-helper) * [The `collection_check_boxes` Helper](#the-collection-check-boxes-helper) 6. [Uploading Files](#uploading-files) * [What Gets Uploaded](#what-gets-uploaded) 7. [Customizing Form Builders](#customizing-form-builders) 8. [Understanding Parameter Naming Conventions](#understanding-parameter-naming-conventions) * [Basic Structures](#basic-structures) * [Combining Them](#combining-them) * [The `fields_for` Helper](#understanding-parameter-naming-conventions-the-fields-for-helper) 9. [Forms to External Resources](#forms-to-external-resources) 10. [Building Complex Forms](#building-complex-forms) * [Configuring the Model](#configuring-the-model) * [Nested Forms](#nested-forms) * [The Controller](#the-controller) * [Removing Objects](#removing-objects) * [Preventing Empty Records](#preventing-empty-records) * [Adding Fields on the Fly](#adding-fields-on-the-fly) 11. [Using Tag Helpers Without a Form Builder](#using-tag-helpers-without-a-form-builder) 12. [Using `form_tag` and `form_for`](#using-form-tag-and-form-for) This guide is not intended to be a complete documentation of available form helpers and their arguments. Please visit [the Rails API documentation](https://edgeapi.rubyonrails.org/) for a complete reference. [1 Dealing with Basic Forms](#dealing-with-basic-forms) ------------------------------------------------------- The main form helper is [`form_with`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormHelper.html#method-i-form_with). 
``` <%= form_with do |form| %> Form contents <% end %> ``` When called without arguments like this, it creates a form tag which, when submitted, will POST to the current page. For instance, assuming the current page is a home page, the generated HTML will look like this: ``` <form accept-charset="UTF-8" action="/" method="post"> <input name="authenticity_token" type="hidden" value="J7CBxfHalt49OSHp27hblqK20c9PgwJ108nDHX/8Cts=" /> Form contents </form> ``` You'll notice that the HTML contains an `input` element with type `hidden`. This `input` is important, because non-GET forms cannot be successfully submitted without it. The hidden input element with the name `authenticity_token` is a security feature of Rails called **cross-site request forgery protection**, and form helpers generate it for every non-GET form (provided that this security feature is enabled). You can read more about this in the [Securing Rails Applications](security#cross-site-request-forgery-csrf) guide. ### [1.1 A Generic Search Form](#a-generic-search-form) One of the most basic forms you see on the web is a search form. This form contains: * a form element with "GET" method, * a label for the input, * a text input element, and * a submit element. To create this form you will use `form_with` and the form builder object it yields. Like so: ``` <%= form_with url: "/search", method: :get do |form| %> <%= form.label :query, "Search for:" %> <%= form.text_field :query %> <%= form.submit "Search" %> <% end %> ``` This will generate the following HTML: ``` <form action="/search" method="get" accept-charset="UTF-8" > <label for="query">Search for:</label> <input id="query" name="query" type="text" /> <input name="commit" type="submit" value="Search" data-disable-with="Search" /> </form> ``` Passing `url: my_specified_path` to `form_with` tells the form where to make the request. However, as explained below, you can also pass ActiveRecord objects to the form. 
For every form input, an ID attribute is generated from its name (`"query"` in above example). These IDs can be very useful for CSS styling or manipulation of form controls with JavaScript. Use "GET" as the method for search forms. This allows users to bookmark a specific search and get back to it. More generally Rails encourages you to use the right HTTP verb for an action. ### [1.2 Helpers for Generating Form Elements](#helpers-for-generating-form-elements) The form builder object yielded by `form_with` provides numerous helper methods for generating form elements such as text fields, checkboxes, and radio buttons. The first parameter to these methods is always the name of the input. When the form is submitted, the name will be passed along with the form data, and will make its way to the `params` in the controller with the value entered by the user for that field. For example, if the form contains `<%= form.text_field :query %>`, then you would be able to get the value of this field in the controller with `params[:query]`. When naming inputs, Rails uses certain conventions that make it possible to submit parameters with non-scalar values such as arrays or hashes, which will also be accessible in `params`. You can read more about them in chapter [Understanding Parameter Naming Conventions](#understanding-parameter-naming-conventions) of this guide. For details on the precise usage of these helpers, please refer to the [API documentation](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormTagHelper.html). 
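As a toy illustration of that bracketed-name convention, a few lines of plain Ruby show how names like `person[name]` and `person[pets][]` nest into a params-style hash. This is not the parser Rails actually uses — `nest_param` is invented here for illustration, and the real parsing is done for Rails by Rack.

```
# Toy illustration only (not the actual Rails/Rack parser): brackets in an
# input name nest hashes, and a trailing "[]" appends to an array.
def nest_param(params, name, value)
  keys = name.scan(/[^\[\]]+/)      # "person[pets][]" -> ["person", "pets"]
  array = name.end_with?("[]")      # trailing "[]" means "collect into array"
  last = keys.pop
  node = keys.reduce(params) { |h, k| h[k] ||= {} }
  if array
    (node[last] ||= []) << value
  else
    node[last] = value
  end
  params
end

params = {}
nest_param(params, "person[name]", "Ana")
nest_param(params, "person[pets][]", "dog")
nest_param(params, "person[pets][]", "cat")
# params is now {"person" => {"name" => "Ana", "pets" => ["dog", "cat"]}},
# mirroring what the controller would see in params under :person.
```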
#### [1.2.1 Checkboxes](#checkboxes) Checkboxes are form controls that give the user a set of options they can enable or disable: ``` <%= form.check_box :pet_dog %> <%= form.label :pet_dog, "I own a dog" %> <%= form.check_box :pet_cat %> <%= form.label :pet_cat, "I own a cat" %> ``` This generates the following: ``` <input type="checkbox" id="pet_dog" name="pet_dog" value="1" /> <label for="pet_dog">I own a dog</label> <input type="checkbox" id="pet_cat" name="pet_cat" value="1" /> <label for="pet_cat">I own a cat</label> ``` The first parameter to [`check_box`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-check_box) is the name of the input. The second parameter is the value of the input. This value will be included in the form data (and be present in `params`) when the checkbox is checked. #### [1.2.2 Radio Buttons](#radio-buttons) Radio buttons, while similar to checkboxes, are controls that specify a set of options in which they are mutually exclusive (i.e., the user can only pick one): ``` <%= form.radio_button :age, "child" %> <%= form.label :age_child, "I am younger than 21" %> <%= form.radio_button :age, "adult" %> <%= form.label :age_adult, "I am over 21" %> ``` Output: ``` <input type="radio" id="age_child" name="age" value="child" /> <label for="age_child">I am younger than 21</label> <input type="radio" id="age_adult" name="age" value="adult" /> <label for="age_adult">I am over 21</label> ``` As with `check_box`, the second parameter to [`radio_button`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-radio_button) is the value of the input. Because these two radio buttons share the same name (`age`), the user will only be able to select one of them, and `params[:age]` will contain either `"child"` or `"adult"`. Always use labels for checkbox and radio buttons. 
They associate text with a specific option and, by expanding the clickable region, make it easier for users to click the inputs. ### [1.3 Other Helpers of Interest](#other-helpers-of-interest) Other form controls worth mentioning are text areas, hidden fields, password fields, number fields, date and time fields, and many more: ``` <%= form.text_area :message, size: "70x5" %> <%= form.hidden_field :parent_id, value: "foo" %> <%= form.password_field :password %> <%= form.number_field :price, in: 1.0..20.0, step: 0.5 %> <%= form.range_field :discount, in: 1..100 %> <%= form.date_field :born_on %> <%= form.time_field :started_at %> <%= form.datetime_local_field :graduation_day %> <%= form.month_field :birthday_month %> <%= form.week_field :birthday_week %> <%= form.search_field :name %> <%= form.email_field :address %> <%= form.telephone_field :phone %> <%= form.url_field :homepage %> <%= form.color_field :favorite_color %> ``` Output: ``` <textarea name="message" id="message" cols="70" rows="5"></textarea> <input type="hidden" name="parent_id" id="parent_id" value="foo" /> <input type="password" name="password" id="password" /> <input type="number" name="price" id="price" step="0.5" min="1.0" max="20.0" /> <input type="range" name="discount" id="discount" min="1" max="100" /> <input type="date" name="born_on" id="born_on" /> <input type="time" name="started_at" id="started_at" /> <input type="datetime-local" name="graduation_day" id="graduation_day" /> <input type="month" name="birthday_month" id="birthday_month" /> <input type="week" name="birthday_week" id="birthday_week" /> <input type="search" name="name" id="name" /> <input type="email" name="address" id="address" /> <input type="tel" name="phone" id="phone" /> <input type="url" name="homepage" id="homepage" /> <input type="color" name="favorite_color" id="favorite_color" value="#000000" /> ``` Hidden inputs are not shown to the user but instead hold data like any textual input. 
Values inside them can be changed with JavaScript. The search, telephone, date, time, color, datetime, datetime-local, month, week, URL, email, number, and range inputs are HTML5 controls. If you require your app to have a consistent experience in older browsers, you will need an HTML5 polyfill (provided by CSS and/or JavaScript). There is definitely [no shortage of solutions for this](https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills), although a popular tool at the moment is [Modernizr](https://modernizr.com/), which provides a simple way to add functionality based on the presence of detected HTML5 features. If you're using password input fields (for any purpose), you might want to configure your application to prevent those parameters from being logged. You can learn about this in the [Securing Rails Applications](security#logging) guide. [2 Dealing with Model Objects](#dealing-with-model-objects) ----------------------------------------------------------- ### [2.1 Binding a Form to an Object](#binding-a-form-to-an-object) The `:model` argument of `form_with` allows us to bind the form builder object to a model object. This means that the form will be scoped to that model object, and the form's fields will be populated with values from that model object. For example, if we have an `@article` model object like: ``` @article = Article.find(42) # => #<Article id: 42, title: "My Title", body: "My Body"> ``` The following form: ``` <%= form_with model: @article do |form| %> <%= form.text_field :title %> <%= form.text_area :body, size: "60x10" %> <%= form.submit %> <% end %> ``` Outputs: ``` <form action="/articles/42" method="post" accept-charset="UTF-8" > <input name="authenticity_token" type="hidden" value="..." 
/> <input type="text" name="article[title]" id="article_title" value="My Title" /> <textarea name="article[body]" id="article_body" cols="60" rows="10"> My Body </textarea> <input type="submit" name="commit" value="Update Article" data-disable-with="Update Article"> </form> ``` There are several things to notice here: * The form `action` is automatically filled with an appropriate value for `@article`. * The form fields are automatically filled with the corresponding values from `@article`. * The form field names are scoped with `article[...]`. This means that `params[:article]` will be a hash containing all these fields' values. You can read more about the significance of input names in the [Understanding Parameter Naming Conventions](#understanding-parameter-naming-conventions) section of this guide. * The submit button is automatically given an appropriate text value. Conventionally, your inputs will mirror model attributes. However, they don't have to! If there is other information you need, you can include it in your form just as with attributes, and access it via `params[:article][:my_nifty_non_attribute_input]`. #### [2.1.1 The `fields_for` Helper](#binding-a-form-to-an-object-the-fields-for-helper) You can create a similar binding without actually creating `<form>` tags with the [`fields_for`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-fields_for) helper. This is useful for editing additional model objects with the same form.
For example, if you had a `Person` model with an associated `ContactDetail` model, you could create a form for creating both like so: ``` <%= form_with model: @person do |person_form| %> <%= person_form.text_field :name %> <%= fields_for :contact_detail, @person.contact_detail do |contact_detail_form| %> <%= contact_detail_form.text_field :phone_number %> <% end %> <% end %> ``` which produces the following output: ``` <form action="/people" accept-charset="UTF-8" method="post"> <input type="hidden" name="authenticity_token" value="bL13x72pldyDD8bgtkjKQakJCpd4A8JdXGbfksxBDHdf1uC0kCMqe2tvVdUYfidJt0fj3ihC4NxiVHv8GVYxJA==" /> <input type="text" name="person[name]" id="person_name" /> <input type="text" name="contact_detail[phone_number]" id="contact_detail_phone_number" /> </form> ``` The object yielded by `fields_for` is a form builder like the one yielded by `form_with`. ### [2.2 Relying on Record Identification](#relying-on-record-identification) The Article model is directly available to users of the application, so - following the best practices for developing with Rails - you should declare it **a resource**: ``` resources :articles ``` Declaring a resource has a number of side effects. See [Rails Routing from the Outside In](routing#resource-routing-the-rails-default) guide for more information on setting up and using resources. When dealing with RESTful resources, calls to `form_with` can get significantly easier if you rely on **record identification**. In short, you can just pass the model instance and have Rails figure out model name and the rest. 
In both of these examples, the long and short style have the same outcome: ``` ## Creating a new article # long-style: form_with(model: @article, url: articles_path) # short-style: form_with(model: @article) ## Editing an existing article # long-style: form_with(model: @article, url: article_path(@article), method: "patch") # short-style: form_with(model: @article) ``` Notice how the short-style `form_with` invocation is conveniently the same, regardless of whether the record is new or existing. Record identification is smart enough to figure out if the record is new by asking `record.persisted?`. It also selects the correct path to submit to, and the name based on the class of the object. If you have a [singular resource](routing#singular-resources), you will need to call `resource` and `resolve` for it to work with `form_with`: ``` resource :geocoder resolve('Geocoder') { [:geocoder] } ``` When you're using STI (single-table inheritance) with your models, you can't rely on record identification on a subclass if only its parent class is declared a resource. You will have to specify `:url` and `:scope` (the model name) explicitly. #### [2.2.1 Dealing with Namespaces](#dealing-with-namespaces) If you have created namespaced routes, `form_with` has a nifty shorthand for that too. If your application has an admin namespace, then ``` form_with model: [:admin, @article] ``` will create a form that submits to the `ArticlesController` inside the admin namespace (submitting to `admin_article_path(@article)` in the case of an update). If you have several levels of namespacing, then the syntax is similar: ``` form_with model: [:admin, :management, @article] ``` For more information on Rails' routing system and the associated conventions, please see the [Rails Routing from the Outside In](routing) guide.
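The decision record identification makes can be pictured with a small plain-Ruby sketch. This is a simplified model of the logic, not Rails' actual implementation; the `Article` stand-in and the `form_options_for` helper are hypothetical names used only for illustration:

```
# Minimal stand-in for an Active Record model (hypothetical).
Article = Struct.new(:id) do
  def persisted?
    !id.nil?
  end
end

# Rough sketch of what record identification decides for form_with:
# a new record submits via POST to the collection path, while a
# persisted record submits via PATCH to its member path.
def form_options_for(article)
  if article.persisted?
    { url: "/articles/#{article.id}", method: "patch" }
  else
    { url: "/articles", method: "post" }
  end
end

form_options_for(Article.new(nil)) # new record: POST /articles
form_options_for(Article.new(42))  # saved record: PATCH /articles/42
```

The real logic additionally derives the parameter scope (`article[...]`) from the object's class, which this sketch omits.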
### [2.3 How do forms with PATCH, PUT, or DELETE methods work?](#how-do-forms-with-patch-put-or-delete-methods-work-questionmark) The Rails framework encourages RESTful design of your applications, which means you'll be making a lot of "PATCH", "PUT", and "DELETE" requests (besides "GET" and "POST"). However, most browsers *don't support* methods other than "GET" and "POST" when it comes to submitting forms. Rails works around this issue by emulating other methods over POST with a hidden input named `"_method"`, which is set to reflect the desired method: ``` form_with(url: search_path, method: "patch") ``` Output: ``` <form accept-charset="UTF-8" action="/search" method="post"> <input name="_method" type="hidden" value="patch" /> <input name="authenticity_token" type="hidden" value="f755bb0ed134b76c432144748a6d4b7a7ddf2b71" /> <!-- ... --> </form> ``` When parsing POSTed data, Rails will take into account the special `_method` parameter and act as if the HTTP method was the one specified inside it ("PATCH" in this example). When rendering a form, submission buttons can override the declared `method` attribute through the `formmethod:` keyword: ``` <%= form_with url: "/posts/1", method: :patch do |form| %> <%= form.button "Delete", formmethod: :delete, data: { confirm: "Are you sure?" } %> <%= form.button "Update" %> <% end %> ``` Similar to `<form>` elements, most browsers *don't support* overriding form methods declared through [formmethod](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#attr-formmethod) other than "GET" and "POST". 
Rails works around this issue by emulating other methods over POST through a combination of [formmethod](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#attr-formmethod), [value](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#attr-value), and [name](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#attr-name) attributes: ``` <form accept-charset="UTF-8" action="/posts/1" method="post"> <input name="_method" type="hidden" value="patch" /> <input name="authenticity_token" type="hidden" value="f755bb0ed134b76c432144748a6d4b7a7ddf2b71" /> <!-- ... --> <button type="submit" formmethod="post" name="_method" value="delete" data-confirm="Are you sure?">Delete</button> <button type="submit" name="button">Update</button> </form> ``` In Rails 6.0 and 5.2, all forms using `form_with` implement `remote: true` by default. These forms will submit data using an XHR (Ajax) request. To disable this, include `local: true`. To dive deeper, see the [Working with JavaScript in Rails](working_with_javascript_in_rails#remote-elements) guide. [3 Making Select Boxes with Ease](#making-select-boxes-with-ease) ----------------------------------------------------------------- Select boxes in HTML require a significant amount of markup - one `<option>` element for each option to choose from. So Rails provides helper methods to reduce this burden. For example, let's say we have a list of cities for the user to choose from.
We can use the [`select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-select) helper like so: ``` <%= form.select :city, ["Berlin", "Chicago", "Madrid"] %> ``` Output: ``` <select name="city" id="city"> <option value="Berlin">Berlin</option> <option value="Chicago">Chicago</option> <option value="Madrid">Madrid</option> </select> ``` We can also designate `<option>` values that differ from their labels: ``` <%= form.select :city, [["Berlin", "BE"], ["Chicago", "CHI"], ["Madrid", "MD"]] %> ``` Output: ``` <select name="city" id="city"> <option value="BE">Berlin</option> <option value="CHI">Chicago</option> <option value="MD">Madrid</option> </select> ``` This way, the user will see the full city name, but `params[:city]` will be one of `"BE"`, `"CHI"`, or `"MD"`. Lastly, we can specify a default choice for the select box with the `:selected` argument: ``` <%= form.select :city, [["Berlin", "BE"], ["Chicago", "CHI"], ["Madrid", "MD"]], selected: "CHI" %> ``` Output: ``` <select name="city" id="city"> <option value="BE">Berlin</option> <option value="CHI" selected="selected">Chicago</option> <option value="MD">Madrid</option> </select> ``` ### [3.1 Option Groups](#option-groups) In some cases we may want to improve the user experience by grouping related options together. We can do so by passing a `Hash` (or comparable `Array`) to `select`: ``` <%= form.select :city, { "Europe" => [ ["Berlin", "BE"], ["Madrid", "MD"] ], "North America" => [ ["Chicago", "CHI"] ], }, selected: "CHI" %> ``` Output: ``` <select name="city" id="city"> <optgroup label="Europe"> <option value="BE">Berlin</option> <option value="MD">Madrid</option> </optgroup> <optgroup label="North America"> <option value="CHI" selected="selected">Chicago</option> </optgroup> </select> ``` ### [3.2 Select Boxes and Model Objects](#select-boxes-and-model-objects) Like other form controls, a select box can be bound to a model attribute. 
For example, if we have a `@person` model object like: ``` @person = Person.new(city: "MD") ``` The following form: ``` <%= form_with model: @person do |form| %> <%= form.select :city, [["Berlin", "BE"], ["Chicago", "CHI"], ["Madrid", "MD"]] %> <% end %> ``` Outputs a select box like: ``` <select name="person[city]" id="person_city"> <option value="BE">Berlin</option> <option value="CHI">Chicago</option> <option value="MD" selected="selected">Madrid</option> </select> ``` Notice that the appropriate option was automatically marked `selected="selected"`. Since this select box was bound to a model, we didn't need to specify a `:selected` argument! ### [3.3 Time Zone and Country Select](#time-zone-and-country-select) To leverage time zone support in Rails, you have to ask your users what time zone they are in. Doing so would require generating select options from a list of pre-defined [`ActiveSupport::TimeZone`](https://edgeapi.rubyonrails.org/classes/ActiveSupport/TimeZone.html) objects, but you can simply use the [`time_zone_select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-time_zone_select) helper that already wraps this: ``` <%= form.time_zone_select :time_zone %> ``` Rails *used* to have a `country_select` helper for choosing countries, but this has been extracted to the [country\_select plugin](https://github.com/stefanpenner/country_select). [4 Using Date and Time Form Helpers](#using-date-and-time-form-helpers) ----------------------------------------------------------------------- If you do not wish to use HTML5 date and time inputs, Rails provides alternative date and time form helpers that render plain select boxes. These helpers render a select box for each temporal component (e.g. year, month, day, etc). 
For example, if we have a `@person` model object like: ``` @person = Person.new(birth_date: Date.new(1995, 12, 21)) ``` The following form: ``` <%= form_with model: @person do |form| %> <%= form.date_select :birth_date %> <% end %> ``` Outputs select boxes like: ``` <select name="person[birth_date(1i)]" id="person_birth_date_1i"> <option value="1990">1990</option> <option value="1991">1991</option> <option value="1992">1992</option> <option value="1993">1993</option> <option value="1994">1994</option> <option value="1995" selected="selected">1995</option> <option value="1996">1996</option> <option value="1997">1997</option> <option value="1998">1998</option> <option value="1999">1999</option> <option value="2000">2000</option> </select> <select name="person[birth_date(2i)]" id="person_birth_date_2i"> <option value="1">January</option> <option value="2">February</option> <option value="3">March</option> <option value="4">April</option> <option value="5">May</option> <option value="6">June</option> <option value="7">July</option> <option value="8">August</option> <option value="9">September</option> <option value="10">October</option> <option value="11">November</option> <option value="12" selected="selected">December</option> </select> <select name="person[birth_date(3i)]" id="person_birth_date_3i"> <option value="1">1</option> ... <option value="21" selected="selected">21</option> ... <option value="31">31</option> </select> ``` Notice that, when the form is submitted, there will be no single value in the `params` hash that contains the full date. Instead, there will be several values with special names like `"birth_date(1i)"`. Active Record knows how to assemble these specially-named values into a full date or time, based on the declared type of the model attribute. So we can pass `params[:person]` to e.g. `Person.new` or `Person#update` just like we would if the form used a single field to represent the full date. 
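The assembly step can be sketched in plain Ruby. This is a simplified model of multiparameter assignment, not Active Record's actual implementation, and `assemble_date` is a hypothetical helper name:

```
require "date"

# Collect the "(1i)".."(3i)" fragments for an attribute and build a
# Date from them -- roughly what Active Record's multiparameter
# assignment does for a date-typed attribute (greatly simplified).
def assemble_date(attrs, name)
  year, month, day = (1..3).map { |i| Integer(attrs.fetch("#{name}(#{i}i)")) }
  Date.new(year, month, day)
end

person_params = {
  "birth_date(1i)" => "1995",
  "birth_date(2i)" => "12",
  "birth_date(3i)" => "21"
}

assemble_date(person_params, "birth_date") # a Date for 1995-12-21
```

Active Record also handles time components (`4i`, `5i`, and so on) and the attribute's declared type; this sketch coves only the date case.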
In addition to the [`date_select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-date_select) helper, Rails provides [`time_select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-time_select) and [`datetime_select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-datetime_select). ### [4.1 Select Boxes for Individual Temporal Components](#select-boxes-for-individual-temporal-components) Rails also provides helpers to render select boxes for individual temporal components: [`select_year`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_year), [`select_month`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_month), [`select_day`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_day), [`select_hour`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_hour), [`select_minute`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_minute), and [`select_second`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/DateHelper.html#method-i-select_second). These helpers are "bare" methods, meaning they are not called on a form builder instance. 
For example: ``` <%= select_year 1999, prefix: "party" %> ``` Outputs a select box like: ``` <select name="party[year]" id="party_year"> <option value="1994">1994</option> <option value="1995">1995</option> <option value="1996">1996</option> <option value="1997">1997</option> <option value="1998">1998</option> <option value="1999" selected="selected">1999</option> <option value="2000">2000</option> <option value="2001">2001</option> <option value="2002">2002</option> <option value="2003">2003</option> <option value="2004">2004</option> </select> ``` For each of these helpers, you may specify a date or time object instead of a number as the default value, and the appropriate temporal component will be extracted and used. [5 Choices from a Collection of Arbitrary Objects](#choices-from-a-collection-of-arbitrary-objects) --------------------------------------------------------------------------------------------------- Often, we want to generate a set of choices in a form from a collection of objects. For example, when we want the user to choose from cities in our database, and we have a `City` model like: ``` City.order(:name).to_a # => [ # #<City id: 3, name: "Berlin">, # #<City id: 1, name: "Chicago">, # #<City id: 2, name: "Madrid"> # ] ``` Rails provides helpers that generate choices from a collection without having to explicitly iterate over it. These helpers determine the value and text label of each choice by calling specified methods on each object in the collection. 
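Conceptually, each of these helpers maps every object in the collection to a label/value pair by calling the two given methods on it. A simplified plain-Ruby sketch, using a `Struct` stand-in for the `City` model (no database involved):

```
# Stand-in for the City model above (hypothetical; not Active Record).
City = Struct.new(:id, :name)
cities = [City.new(3, "Berlin"), City.new(1, "Chicago"), City.new(2, "Madrid")]

# With :id as the value method and :name as the text method, each city
# becomes a [label, value] pair -- the same shape the select helper accepts.
choices = cities.map { |city| [city.public_send(:name), city.public_send(:id)] }
# => [["Berlin", 3], ["Chicago", 1], ["Madrid", 2]]
```

The collection helpers below do this mapping for you, so you only name the value method and the text method.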
### [5.1 The `collection_select` Helper](#the-collection-select-helper) To generate a select box for our cities, we can use [`collection_select`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-collection_select): ``` <%= form.collection_select :city_id, City.order(:name), :id, :name %> ``` Output: ``` <select name="city_id" id="city_id"> <option value="3">Berlin</option> <option value="1">Chicago</option> <option value="2">Madrid</option> </select> ``` With `collection_select`, we specify the value method first (`:id` in the example above), and the text label method second (`:name` in the example above). This is the opposite of the order used when specifying choices for the `select` helper, where the text label comes first and the value second. ### [5.2 The `collection_radio_buttons` Helper](#the-collection-radio-buttons-helper) To generate a set of radio buttons for our cities, we can use [`collection_radio_buttons`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-collection_radio_buttons): ``` <%= form.collection_radio_buttons :city_id, City.order(:name), :id, :name %> ``` Output: ``` <input type="radio" name="city_id" value="3" id="city_id_3"> <label for="city_id_3">Berlin</label> <input type="radio" name="city_id" value="1" id="city_id_1"> <label for="city_id_1">Chicago</label> <input type="radio" name="city_id" value="2" id="city_id_2"> <label for="city_id_2">Madrid</label> ``` ### [5.3 The `collection_check_boxes` Helper](#the-collection-check-boxes-helper) To generate a set of check boxes for our cities (which allows users to choose more than one), we can use [`collection_check_boxes`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-collection_check_boxes): ``` <%= form.collection_check_boxes :city_id, City.order(:name), :id, :name %> ``` Output: ``` <input type="checkbox" name="city_id[]" value="3" id="city_id_3"> <label for="city_id_3">Berlin</label>
<input type="checkbox" name="city_id[]" value="1" id="city_id_1"> <label for="city_id_1">Chicago</label> <input type="checkbox" name="city_id[]" value="2" id="city_id_2"> <label for="city_id_2">Madrid</label> ``` [6 Uploading Files](#uploading-files) ------------------------------------- A common task is uploading some sort of file, whether it's a picture of a person or a CSV file containing data to process. File upload fields can be rendered with the [`file_field`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-file_field) helper. ``` <%= form_with model: @person do |form| %> <%= form.file_field :picture %> <% end %> ``` The most important thing to remember with file uploads is that the rendered form's `enctype` attribute **must** be set to "multipart/form-data". This is done automatically if you use a `file_field` inside a `form_with`. You can also set the attribute manually: ``` <%= form_with url: "/uploads", multipart: true do |form| %> <%= file_field_tag :picture %> <% end %> ``` Note that, in accordance with `form_with` conventions, the field names in the two forms above will also differ. That is, the field name in the first form will be `person[picture]` (accessible via `params[:person][:picture]`), and the field name in the second form will be just `picture` (accessible via `params[:picture]`). ### [6.1 What Gets Uploaded](#what-gets-uploaded) The object in the `params` hash is an instance of [`ActionDispatch::Http::UploadedFile`](https://edgeapi.rubyonrails.org/classes/ActionDispatch/Http/UploadedFile.html). The following snippet saves the uploaded file in `#{Rails.root}/public/uploads` under the same name as the original file. 
``` def upload uploaded_file = params[:picture] File.open(Rails.root.join('public', 'uploads', uploaded_file.original_filename), 'wb') do |file| file.write(uploaded_file.read) end end ``` Once a file has been uploaded, there are a multitude of potential tasks: deciding where to store the files (on disk, Amazon S3, etc.), associating them with models, resizing image files, generating thumbnails, and so on. [Active Storage](active_storage_overview) is designed to assist with these tasks. [7 Customizing Form Builders](#customizing-form-builders) --------------------------------------------------------- The object yielded by `form_with` and `fields_for` is an instance of [`ActionView::Helpers::FormBuilder`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html). Form builders encapsulate the notion of displaying form elements for a single object. While you can write helpers for your forms in the usual way, you can also create a subclass of `ActionView::Helpers::FormBuilder` and add the helpers there. For example, ``` <%= form_with model: @person do |form| %> <%= text_field_with_label form, :first_name %> <% end %> ``` can be replaced with ``` <%= form_with model: @person, builder: LabellingFormBuilder do |form| %> <%= form.text_field :first_name %> <% end %> ``` by defining a `LabellingFormBuilder` class similar to the following: ``` class LabellingFormBuilder < ActionView::Helpers::FormBuilder def text_field(attribute, options={}) label(attribute) + super end end ``` If you reuse this frequently, you could define a `labeled_form_with` helper that automatically applies the `builder: LabellingFormBuilder` option: ``` def labeled_form_with(model: nil, scope: nil, url: nil, format: nil, **options, &block) options.merge!
builder: LabellingFormBuilder form_with model: model, scope: scope, url: url, format: format, **options, &block end ``` The form builder used also determines what happens when you do: ``` <%= render partial: f %> ``` If `f` is an instance of `ActionView::Helpers::FormBuilder`, then this will render the `form` partial, setting the partial's object to the form builder. If the form builder is of class `LabellingFormBuilder`, then the `labelling_form` partial would be rendered instead. [8 Understanding Parameter Naming Conventions](#understanding-parameter-naming-conventions) ------------------------------------------------------------------------------------------- Values from forms can be at the top level of the `params` hash or nested in another hash. For example, in a standard `create` action for a Person model, `params[:person]` would usually be a hash of all the attributes for the person to create. The `params` hash can also contain arrays, arrays of hashes, and so on. Fundamentally, HTML forms don't know about any sort of structured data; all they generate is name-value pairs, where the pairs are just plain strings. The arrays and hashes you see in your application are the result of some parameter naming conventions that Rails uses. ### [8.1 Basic Structures](#basic-structures) The two basic structures are arrays and hashes. Hashes mirror the syntax used for accessing the value in `params`. For example, if a form contains: ``` <input id="person_name" name="person[name]" type="text" value="Henry"/> ``` the `params` hash will contain ``` {'person' => {'name' => 'Henry'}} ``` and `params[:person][:name]` will retrieve the submitted value in the controller. Hashes can be nested as many levels as required, for example: ``` <input id="person_address_city" name="person[address][city]" type="text" value="New York"/> ``` will result in the `params` hash being ``` {'person' => {'address' => {'city' => 'New York'}}} ``` Normally Rails ignores duplicate parameter names.
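The bracket-to-hash convention can be modeled with a toy parser. This is a simplified sketch for intuition only; the real parsing happens in Rack, and `store_param` is a hypothetical helper:

```
# Toy parser: split "person[address][city]" into its bracketed keys and
# store the value in nested hashes, creating intermediate levels as needed.
def store_param(params, name, value)
  keys = name.scan(/[^\[\]]+/) # "person[address][city]" => ["person", "address", "city"]
  leaf = keys[0..-2].inject(params) { |hash, key| hash[key] ||= {} }
  leaf[keys.last] = value
  params
end

parsed = {}
store_param(parsed, "person[name]", "Henry")
store_param(parsed, "person[address][city]", "New York")
parsed # => {"person" => {"name" => "Henry", "address" => {"city" => "New York"}}}
```

Rack's actual parser also handles the array conventions described below, which this sketch omits.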
If the parameter name ends with an empty set of square brackets `[]`, then the values will be accumulated in an array. If you wanted users to be able to input multiple phone numbers, you could place this in the form: ``` <input name="person[phone_number][]" type="text"/> <input name="person[phone_number][]" type="text"/> <input name="person[phone_number][]" type="text"/> ``` This would result in `params[:person][:phone_number]` being an array containing the inputted phone numbers. ### [8.2 Combining Them](#combining-them) We can mix and match these two concepts. One element of a hash might be an array as in the previous example, or you can have an array of hashes. For example, a form might let you create any number of addresses by repeating the following form fragment: ``` <input name="person[addresses][][line1]" type="text"/> <input name="person[addresses][][line2]" type="text"/> <input name="person[addresses][][city]" type="text"/> <input name="person[addresses][][line1]" type="text"/> <input name="person[addresses][][line2]" type="text"/> <input name="person[addresses][][city]" type="text"/> ``` This would result in `params[:person][:addresses]` being an array of hashes with keys `line1`, `line2`, and `city`. There's a restriction, however: while hashes can be nested arbitrarily, only one level of "arrayness" is allowed. Arrays can usually be replaced by hashes; for example, instead of having an array of model objects, one can have a hash of model objects keyed by their id, an array index, or some other parameter. Array parameters do not play well with the `check_box` helper. According to the HTML specification, unchecked checkboxes submit no value. However, it is often convenient for a checkbox to always submit a value. The `check_box` helper fakes this by creating an auxiliary hidden input with the same name.
If the checkbox is unchecked, only the hidden input is submitted; if it is checked, both are submitted, but the value submitted by the checkbox takes precedence. ### [8.3 The `fields_for` Helper](#understanding-parameter-naming-conventions-the-fields-for-helper) Let's say we want to render a form with a set of fields for each of a person's addresses. The `fields_for` helper and its `:index` argument can assist with this: ``` <%= form_with model: @person do |person_form| %> <%= person_form.text_field :name %> <% @person.addresses.each do |address| %> <%= person_form.fields_for address, index: address.id do |address_form| %> <%= address_form.text_field :city %> <% end %> <% end %> <% end %> ``` Assuming the person had two addresses with ids 23 and 45, this would create output similar to this: ``` <form accept-charset="UTF-8" action="/people/1" method="post"> <input name="_method" type="hidden" value="patch" /> <input id="person_name" name="person[name]" type="text" /> <input id="person_address_23_city" name="person[address][23][city]" type="text" /> <input id="person_address_45_city" name="person[address][45][city]" type="text" /> </form> ``` This will result in a `params` hash that looks like ``` {'person' => {'name' => 'Bob', 'address' => {'23' => {'city' => 'Paris'}, '45' => {'city' => 'London'}}}} ``` Rails knows that all these inputs should be part of the person hash because you called `fields_for` on the first form builder. By specifying an `:index` option, you're telling Rails that instead of naming the inputs `person[address][city]`, it should insert that index surrounded by `[]` between the address and the city. This is often useful as it is then easy to locate which Address record should be modified. You can pass numbers with some other significance, strings, or even `nil` (which will result in an array parameter being created).
To create more intricate nestings, you can specify the first part of the input name (`person[address]` in the previous example) explicitly: ``` <%= fields_for 'person[address][primary]', address, index: address.id do |address_form| %> <%= address_form.text_field :city %> <% end %> ``` will create inputs like ``` <input id="person_address_primary_1_city" name="person[address][primary][1][city]" type="text" value="Bologna" /> ``` As a general rule, the final input name is the concatenation of the name given to `fields_for`/`form_with`, the index value, and the name of the attribute. You can also pass an `:index` option directly to helpers such as `text_field`, but it is usually less repetitive to specify this at the form builder level rather than on individual input controls. As a shortcut, you can append `[]` to the name and omit the `:index` option. This is the same as specifying `index: address.id`, so ``` <%= fields_for 'person[address][primary][]', address do |address_form| %> <%= address_form.text_field :city %> <% end %> ``` produces exactly the same output as the previous example. [9 Forms to External Resources](#forms-to-external-resources) ------------------------------------------------------------- Rails' form helpers can also be used to build a form for posting data to an external resource. However, at times it can be necessary to set an `authenticity_token` for the resource; this can be done by passing an `authenticity_token: 'your_external_token'` parameter to the `form_with` options: ``` <%= form_with url: 'http://farfar.away/form', authenticity_token: 'external_token' do %> Form contents <% end %> ``` Sometimes when submitting data to an external resource, like a payment gateway, the fields that can be used in the form are limited by an external API and it may be undesirable to generate an `authenticity_token`.
To not send a token, simply pass `false` to the `:authenticity_token` option:

```
<%= form_with url: 'http://farfar.away/form', authenticity_token: false do %>
  Form contents
<% end %>
```

[10 Building Complex Forms](#building-complex-forms)
----------------------------------------------------

Many apps grow beyond simple forms editing a single object. For example, when creating a `Person` you might want to allow the user to (on the same form) create multiple address records (home, work, etc.). When later editing that person, the user should be able to add, remove, or amend addresses as necessary.

### [10.1 Configuring the Model](#configuring-the-model)

Active Record provides model-level support via the [`accepts_nested_attributes_for`](https://edgeapi.rubyonrails.org/classes/ActiveRecord/NestedAttributes/ClassMethods.html#method-i-accepts_nested_attributes_for) method:

```
class Person < ApplicationRecord
  has_many :addresses, inverse_of: :person
  accepts_nested_attributes_for :addresses
end

class Address < ApplicationRecord
  belongs_to :person
end
```

This creates an `addresses_attributes=` method on `Person` that allows you to create, update, and (optionally) destroy addresses.

### [10.2 Nested Forms](#nested-forms)

The following form allows a user to create a `Person` and its associated addresses.

```
<%= form_with model: @person do |form| %>
  Addresses:
  <ul>
    <%= form.fields_for :addresses do |addresses_form| %>
      <li>
        <%= addresses_form.label :kind %>
        <%= addresses_form.text_field :kind %>

        <%= addresses_form.label :street %>
        <%= addresses_form.text_field :street %>
        ...
      </li>
    <% end %>
  </ul>
<% end %>
```

When an association accepts nested attributes, `fields_for` renders its block once for every element of the association. In particular, if a person has no addresses it renders nothing. A common pattern is for the controller to build one or more empty children so that at least one set of fields is shown to the user.
The example below would result in 2 sets of address fields being rendered on the new person form.

```
def new
  @person = Person.new
  2.times { @person.addresses.build }
end
```

`fields_for` yields a form builder. The parameters' names will be what `accepts_nested_attributes_for` expects. For example, when creating a user with 2 addresses, the submitted parameters would look like:

```
{
  'person' => {
    'name' => 'John Doe',
    'addresses_attributes' => {
      '0' => {
        'kind' => 'Home',
        'street' => '221b Baker Street'
      },
      '1' => {
        'kind' => 'Office',
        'street' => '31 Spooner Street'
      }
    }
  }
}
```

The keys of the `:addresses_attributes` hash are unimportant; they merely need to be different for each address.

If the associated object is already saved, `fields_for` autogenerates a hidden input with the `id` of the saved record. You can disable this by passing `include_id: false` to `fields_for`.

### [10.3 The Controller](#the-controller)

As usual you need to [declare the permitted parameters](action_controller_overview#strong-parameters) in the controller before you pass them to the model:

```
def create
  @person = Person.new(person_params)
  # ...
end

private

def person_params
  params.require(:person).permit(:name, addresses_attributes: [:id, :kind, :street])
end
```

### [10.4 Removing Objects](#removing-objects)

You can allow users to delete associated objects by passing `allow_destroy: true` to `accepts_nested_attributes_for`:

```
class Person < ApplicationRecord
  has_many :addresses
  accepts_nested_attributes_for :addresses, allow_destroy: true
end
```

If the hash of attributes for an object contains the key `_destroy` with a value that evaluates to `true` (e.g. `1`, `'1'`, `true`, or `'true'`), then the object will be destroyed.
This form allows users to remove addresses:

```
<%= form_with model: @person do |form| %>
  Addresses:
  <ul>
    <%= form.fields_for :addresses do |addresses_form| %>
      <li>
        <%= addresses_form.check_box :_destroy %>
        <%= addresses_form.label :kind %>
        <%= addresses_form.text_field :kind %>
        ...
      </li>
    <% end %>
  </ul>
<% end %>
```

Don't forget to update the permitted params in your controller to also include the `_destroy` field:

```
def person_params
  params.require(:person).
    permit(:name, addresses_attributes: [:id, :kind, :street, :_destroy])
end
```

### [10.5 Preventing Empty Records](#preventing-empty-records)

It is often useful to ignore sets of fields that the user has not filled in. You can control this by passing a `:reject_if` proc to `accepts_nested_attributes_for`. This proc will be called with each hash of attributes submitted by the form. If the proc returns `true` then Active Record will not build an associated object for that hash. The example below only tries to build an address if the `kind` attribute is set.

```
class Person < ApplicationRecord
  has_many :addresses
  accepts_nested_attributes_for :addresses,
    reject_if: lambda { |attributes| attributes['kind'].blank? }
end
```

As a convenience you can instead pass the symbol `:all_blank`, which will create a proc that rejects records where all the attributes are blank, excluding any value for `_destroy`.

### [10.6 Adding Fields on the Fly](#adding-fields-on-the-fly)

Rather than rendering multiple sets of fields ahead of time, you may wish to add them only when a user clicks on an "Add new address" button. Rails does not provide any built-in support for this. When generating new sets of fields you must ensure the key of the associated array is unique; the current JavaScript date (milliseconds since the [epoch](https://en.wikipedia.org/wiki/Unix_time)) is a common choice.
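The `:all_blank` shortcut described under "Preventing Empty Records" can be approximated in plain Ruby. The proc below is a sketch of the idea, not Active Record's actual implementation (which uses `blank?` rather than a string check):

```ruby
# Rough approximation of the proc generated by reject_if: :all_blank.
# A record is rejected when every attribute except _destroy is blank.
ALL_BLANK = lambda do |attributes|
  attributes.all? { |key, value| key == "_destroy" || value.to_s.strip.empty? }
end

ALL_BLANK.call("kind" => "", "street" => " ")    # => true  (rejected)
ALL_BLANK.call("kind" => "Home", "street" => "") # => false (kept)
ALL_BLANK.call("_destroy" => "1", "kind" => "")  # => true  (_destroy ignored)
```

Note that a `_destroy` value on its own does not count as filled-in data, so an untouched, freshly built child row is still rejected even when its destroy checkbox is rendered.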
[11 Using Tag Helpers Without a Form Builder](#using-tag-helpers-without-a-form-builder)
----------------------------------------------------------------------------------------

In case you need to render form fields outside of the context of a form builder, Rails provides tag helpers for common form elements. For example, [`check_box_tag`](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormTagHelper.html#method-i-check_box_tag):

```
<%= check_box_tag "accept" %>
```

Output:

```
<input type="checkbox" name="accept" id="accept" value="1" />
```

Generally, these helpers have the same name as their form builder counterparts plus a `_tag` suffix. For a complete list, see the [`FormTagHelper` API documentation](https://edgeapi.rubyonrails.org/classes/ActionView/Helpers/FormTagHelper.html).

[12 Using `form_tag` and `form_for`](#using-form-tag-and-form-for)
------------------------------------------------------------------

Before `form_with` was introduced in Rails 5.1, its functionality was split between [`form_tag`](https://api.rubyonrails.org/v5.2/classes/ActionView/Helpers/FormTagHelper.html#method-i-form_tag) and [`form_for`](https://api.rubyonrails.org/v5.2/classes/ActionView/Helpers/FormHelper.html#method-i-form_for). Both are now soft-deprecated. Documentation on their usage can be found in [older versions of this guide](https://guides.rubyonrails.org/v5.2/form_helpers.html).

Feedback
--------

You're encouraged to help improve the quality of this guide. Please contribute if you see any typos or factual errors. To get started, you can read our [documentation contributions](https://edgeguides.rubyonrails.org/contributing_to_ruby_on_rails.html#contributing-to-the-rails-documentation) section. You may also find incomplete content or content that is not up to date. Please do add any missing documentation for main.
Make sure to check [Edge Guides](https://edgeguides.rubyonrails.org) first to verify if the issues are already fixed or not on the main branch. Check the Ruby on Rails Guides Guidelines for style and conventions. If for whatever reason you spot something to fix but cannot patch it yourself, please [open an issue](https://github.com/rails/rails/issues). And last but not least, any kind of discussion regarding Ruby on Rails documentation is very welcome on the [rubyonrails-docs mailing list](https://discuss.rubyonrails.org/c/rubyonrails-docs).